path: root/crypto/gcm.c
Commit message [Author, Date, Files changed, Lines -/+]
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 [Thomas Gleixner, 2019-06-19, 1 file, -4/+1]

    Based on 2 normalized pattern(s):

        this program is free software you can redistribute it and or modify
        it under the terms of the gnu general public license version 2 as
        published by the free software foundation

        this program is free software you can redistribute it and or modify
        it under the terms of the gnu general public license version 2 as
        published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier
    GPL-2.0-only has been chosen to replace the boilerplate/reference in
    4122 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Enrico Weigelt <info@metux.net>
    Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: gcm - fix incompatibility between "gcm" and "gcm_base" [Eric Biggers, 2019-04-19, 1 file, -23/+11]

    GCM instances can be created by either the "gcm" template, which only allows
    choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows
    choosing the ctr and ghash implementations, e.g.
    "gcm_base(ctr(aes-generic),ghash-generic)".

    However, a "gcm_base" instance prevents a "gcm" instance from being
    registered using the same implementations.  Nor will the instance be found
    by lookups of "gcm".  This can be used as a denial of service.  Moreover,
    "gcm_base" instances are never tested by the crypto self-tests, even if
    there are compatible "gcm" tests.

    The root cause of these problems is that instances of the two templates use
    different cra_names.  Therefore, fix these problems by making "gcm_base"
    instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead
    of "gcm_base(ctr(aes-generic),ghash-generic)".

    This requires extracting the block cipher name from the name of the ctr
    algorithm.  It also requires starting to verify that the algorithms are
    really ctr and ghash, not something else entirely.  But it would be bizarre
    if anyone were actually using non-gcm-compatible algorithms with gcm_base,
    so this shouldn't break anyone in practice.

    Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
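    A minimal allocation sketch (an editor's illustration, not code from the
    patch): both spellings below instantiate AES-GCM via crypto_alloc_aead(),
    and with this fix the gcm_base form registers under the same cra_name
    "gcm(aes)", so it is found by plain "gcm" lookups and covered by the "gcm"
    self-tests.

        #include <crypto/aead.h>
        #include <linux/err.h>

        /* Sketch: allocate AES-GCM through both template spellings. */
        static int gcm_template_spellings(void)
        {
            struct crypto_aead *plain, *base;

            plain = crypto_alloc_aead("gcm(aes)", 0, 0);
            if (IS_ERR(plain))
                return PTR_ERR(plain);

            base = crypto_alloc_aead("gcm_base(ctr(aes-generic),ghash-generic)",
                                     0, 0);
            if (IS_ERR(base)) {
                crypto_free_aead(plain);
                return PTR_ERR(base);
            }

            /* Both tfms now report the cra_name "gcm(aes)". */
            crypto_free_aead(base);
            crypto_free_aead(plain);
            return 0;
        }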
* crypto: run initcalls for generic implementations earlier [Eric Biggers, 2019-04-18, 1 file, -1/+1]

    Use subsys_initcall for registration of all templates and generic algorithm
    implementations, rather than module_init.  Then change cryptomgr to use
    arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation of
    an algorithm are built into the kernel (not loadable modules), the generic
    implementation is registered before the optimized one.  Otherwise, the
    self-tests for the optimized implementation are unable to allocate the
    generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed.  So, unaligned accesses will crash the kernel.  This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
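    A sketch of the registration pattern this change applies (the template
    name here is hypothetical): generic templates and algorithms move from
    module_init() to subsys_initcall(), so built-in generic implementations
    register before arch-optimized ones.

        #include <crypto/algapi.h>
        #include <linux/module.h>

        static struct crypto_template example_tmpl;   /* hypothetical template */

        static int __init example_mod_init(void)
        {
            return crypto_register_template(&example_tmpl);
        }

        static void __exit example_mod_exit(void)
        {
            crypto_unregister_template(&example_tmpl);
        }

        /* Previously module_init(example_mod_init); now registered earlier: */
        subsys_initcall(example_mod_init);
        module_exit(example_mod_exit);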
* crypto: gcm - use template array registering API to simplify the code [Xiongfeng Wang, 2019-01-25, 1 file, -50/+23]

    Use crypto template array registering API to simplify the code.

    Signed-off-by: Xiongfeng Wang <xiongfeng.wang@linaro.org>
    Reviewed-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
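    A sketch of the array-registration API referred to above, with a
    hypothetical template array: one register/unregister call replaces a
    sequence of per-template calls and their error-unwind code.

        #include <crypto/algapi.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        static struct crypto_template example_tmpls[2];  /* hypothetical templates */

        static int __init example_init(void)
        {
            /* One call registers the whole array and unwinds on failure. */
            return crypto_register_templates(example_tmpls,
                                             ARRAY_SIZE(example_tmpls));
        }

        static void __exit example_exit(void)
        {
            crypto_unregister_templates(example_tmpls, ARRAY_SIZE(example_tmpls));
        }

        module_init(example_init);
        module_exit(example_exit);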
* crypto: gcm - use correct endianness type in gcm_hash_len() [Eric Biggers, 2019-01-18, 1 file, -1/+1]

    In gcm_hash_len(), use be128 rather than u128.  This fixes the following
    sparse warnings:

        crypto/gcm.c:252:19: warning: incorrect type in assignment (different base types)
        crypto/gcm.c:252:19:    expected unsigned long long [usertype] a
        crypto/gcm.c:252:19:    got restricted __be64 [usertype]
        crypto/gcm.c:253:19: warning: incorrect type in assignment (different base types)
        crypto/gcm.c:253:19:    expected unsigned long long [usertype] b
        crypto/gcm.c:253:19:    got restricted __be64 [usertype]

    No actual change in behavior.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
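    A sketch of the length-block construction at issue (an editor's
    illustration, not the patch itself): GHASH consumes the bit lengths of the
    AAD and ciphertext as big-endian 64-bit values, so be128 is the right type
    for the buffer.

        #include <asm/byteorder.h>
        #include <crypto/b128ops.h>   /* be128: two __be64 members, a and b */
        #include <linux/types.h>

        static void gcm_make_length_block(be128 *len_block,
                                          unsigned int assoclen,
                                          unsigned int cryptlen)
        {
            /* Big-endian bit counts, as required by the GHASH definition. */
            len_block->a = cpu_to_be64((u64)assoclen * 8);
            len_block->b = cpu_to_be64((u64)cryptlen * 8);
        }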
* crypto: null - Remove VLA usage of skcipher [Kees Cook, 2018-09-28, 1 file, -4/+4]

    In the quest to remove all stack VLA usage from the kernel[1], this
    replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with
    struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which
    uses a fixed stack size.

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
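    A sketch of the converted pattern (function and parameter names are
    illustrative): the sync skcipher type bounds the request size, so the
    on-stack request macro no longer needs a variable-length array.

        #include <crypto/skcipher.h>
        #include <linux/scatterlist.h>

        static int null_xfer_sketch(struct crypto_sync_skcipher *null_tfm,
                                    struct scatterlist *src,
                                    struct scatterlist *dst,
                                    unsigned int len)
        {
            /* Fixed-size on-stack request; no VLA involved. */
            SYNC_SKCIPHER_REQUEST_ON_STACK(req, null_tfm);

            skcipher_request_set_sync_tfm(req, null_tfm);
            skcipher_request_set_callback(req, 0, NULL, NULL);
            skcipher_request_set_crypt(req, src, dst, len, NULL);

            return crypto_skcipher_encrypt(req);
        }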
* crypto: null - Get rid of crypto_{get,put}_default_null_skcipher2() [Eric Biggers, 2017-12-22, 1 file, -2/+2]

    Since commit 499a66e6b689 ("crypto: null - Remove default null blkcipher"),
    crypto_get_default_null_skcipher2() and crypto_put_default_null_skcipher2()
    are the same as their non-2 equivalents.  So switch callers of the "2"
    versions over to the original versions and remove the "2" versions.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - move to generic async completion [Gilad Ben-Yossef, 2017-11-03, 1 file, -26/+6]

    gcm starts an async crypto op and waits for it to complete.  Move it over
    to the generic code doing the same.

    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
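    A sketch of the generic helpers this conversion moves to (illustrative,
    shown with a plain skcipher request): DECLARE_CRYPTO_WAIT(),
    crypto_req_done and crypto_wait_req() replace the open-coded completion
    struct and callback that gcm's setkey path carried before.

        #include <crypto/skcipher.h>
        #include <linux/crypto.h>

        static int do_encrypt_and_wait(struct skcipher_request *req)
        {
            DECLARE_CRYPTO_WAIT(wait);

            skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                          crypto_req_done, &wait);

            /* Turns -EINPROGRESS/-EBUSY into a sleep until completion. */
            return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
        }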
* crypto: gcm - Use GCM IV size constant [Corentin LABBE, 2017-09-22, 1 file, -11/+12]

    This patch replaces the GCM IV size value with its constant name.

    Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - wait for crypto op not signal safe [Gilad Ben-Yossef, 2017-05-23, 1 file, -4/+2]

    crypto_gcm_setkey() was using wait_for_completion_interruptible() to wait
    for completion of the async crypto op, but if a signal occurs it may return
    before the HW crypto provider's DMA ops finish, thus corrupting the data
    buffer that is kfree'ed in this case.

    Resolve this by using wait_for_completion() instead.

    Reported-by: Eric Biggers <ebiggers3@gmail.com>
    Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
    CC: stable@vger.kernel.org
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Get rid of crypto_spawn_skcipher2() [Eric Biggers, 2016-11-01, 1 file, -1/+1]

    Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level givcipher
    interface"), crypto_spawn_skcipher2() and crypto_spawn_skcipher() are
    equivalent.  So switch callers of crypto_spawn_skcipher2() to
    crypto_spawn_skcipher() and remove it.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Get rid of crypto_grab_skcipher2() [Eric Biggers, 2016-11-01, 1 file, -3/+3]

    Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level givcipher
    interface"), crypto_grab_skcipher2() and crypto_grab_skcipher() are
    equivalent.  So switch callers of crypto_grab_skcipher2() to
    crypto_grab_skcipher() and remove it.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Fix error return code in crypto_gcm_create_common() [Wei Yongjun, 2016-10-25, 1 file, -1/+1]

    Return error code -EINVAL from the invalid alg ivsize error handling case
    instead of 0, as done elsewhere in this function.

    Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Fix IV buffer size in crypto_gcm_setkey [Ondrej Mosnáček, 2016-10-02, 1 file, -1/+1]

    The cipher block size for GCM is 16 bytes, and thus the CTR transform used
    in crypto_gcm_setkey() will also expect a 16-byte IV.  However, the code
    currently reserves only 8 bytes for the IV, causing an out-of-bounds access
    in the CTR transform.  This patch fixes the issue by setting the size of
    the IV buffer to 16 bytes.

    Fixes: 84c911523020 ("[CRYPTO] gcm: Add support for async ciphers")
    Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
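    A sketch of the buffer layout being corrected (field names are
    approximate, not the exact struct from gcm.c): the IV handed to the CTR
    transform must cover a full AES block, so the array needs AES_BLOCK_SIZE
    (16) bytes rather than 8.

        #include <crypto/aes.h>
        #include <crypto/b128ops.h>
        #include <linux/scatterlist.h>

        struct gcm_setkey_data_sketch {
            be128 hash;                 /* receives the GHASH key H */
            u8 iv[AES_BLOCK_SIZE];      /* was u8 iv[8]: CTR writes 16 bytes */
            struct scatterlist sg;
        };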
* crypto: gcm - Use skcipher [Herbert Xu, 2016-07-18, 1 file, -53/+58]

    This patch converts gcm to use the new skcipher interface as opposed to
    ablkcipher.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Filter out async ghash if necessary [Herbert Xu, 2016-06-20, 1 file, -1/+3]

    As it stands, if you ask for a sync gcm you may actually end up with an
    async one, because the lookup does not filter out async implementations of
    ghash.

    This patch fixes this by adding the necessary filter when looking for
    ghash.

    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge branch 'for-4.3/sg' of git://git.kernel.dk/linux-block [Linus Torvalds, 2015-09-02, 1 file, -2/+2]

    Pull SG updates from Jens Axboe:
    "This contains a set of scatter-gather related changes/fixes for 4.3:

      - Add support for limited chaining of sg tables even for architectures
        that do not set ARCH_HAS_SG_CHAIN.  From Christoph.

      - Add sg chain support to target_rd.  From Christoph.

      - Fixup open coded sg->page_link in crypto/omap-sham.  From Christoph.

      - Fixup open coded crypto ->page_link manipulation.  From Dan.

      - Also from Dan, automated fixup of manual sg_unmark_end() manipulations.

      - Also from Dan, automated fixup of open coded sg_phys() implementations.

      - From Robert Jarzmik, addition of an sg table splitting helper that
        drivers can use"

    * 'for-4.3/sg' of git://git.kernel.dk/linux-block:
      lib: scatterlist: add sg splitting function
      scatterlist: use sg_phys()
      crypto/omap-sham: remove an open coded access to ->page_link
      scatterlist: remove open coded sg_unmark_end instances
      crypto: replace scatterwalk_sg_chain with sg_chain
      target/rd: always chain S/G list
      scatterlist: allow limited chaining without ARCH_HAS_SG_CHAIN
| * crypto: replace scatterwalk_sg_chain with sg_chain [Dan Williams, 2015-08-17, 1 file, -2/+2]

    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    [hch: split from a larger patch by Dan]
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Jens Axboe <axboe@fb.com>
* | crypto: aead - Remove CRYPTO_ALG_AEAD_NEW flag [Herbert Xu, 2015-08-17, 1 file, -9/+3]

    This patch removes the CRYPTO_ALG_AEAD_NEW flag now that everyone has been
    converted.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: gcm - Use new IV convention [Herbert Xu, 2015-07-14, 1 file, -37/+77]

    This patch converts rfc4106 to the new calling convention where the IV is
    now part of the AD and needs to be skipped.  This patch also makes use of
    the new type-safe way of freeing instances.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Convert to new AEAD interface [Herbert Xu, 2015-06-17, 1 file, -532/+373]

    This patch converts generic gcm and its associated transforms to the new
    AEAD interface.  The biggest reward is in code reduction for rfc4543 where
    it used to do IV stitching which is no longer needed as the IV is already
    part of the AD on input.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Use default null skcipher [Herbert Xu, 2015-05-22, 1 file, -17/+6]

    This patch makes gcm use the default null skcipher instead of allocating a
    new one for each tfm.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Use crypto_aead_set_reqsize helper [Herbert Xu, 2015-05-13, 1 file, -11/+11]

    This patch uses the crypto_aead_set_reqsize helper to avoid directly
    touching the internals of aead.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: include crypto- module prefix in template [Kees Cook, 2014-11-26, 1 file, -0/+1]

    This adds the module loading prefix "crypto-" to the template lookup as
    well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly
    includes the "crypto-" prefix at every level, correctly rejecting "vfat":

        net-pf-38
        algif-hash
        crypto-vfat(blowfish)
        crypto-vfat(blowfish)-all
        crypto-vfat

    Reported-by: Mathias Krause <minipli@googlemail.com>
    Signed-off-by: Kees Cook <keescook@chromium.org>
    Acked-by: Mathias Krause <minipli@googlemail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: prefix module autoloading with "crypto-" [Kees Cook, 2014-11-24, 1 file, -3/+3]

    This prefixes all crypto module loading with "crypto-" so we never run the
    risk of exposing module auto-loading to userspace via a crypto API, as
    demonstrated by Mathias Krause:

        https://lkml.org/lkml/2013/3/4/70

    Signed-off-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
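    A sketch of the aliasing scheme as it applies here (the exact alias list
    in gcm.c may differ): algorithms and templates advertise "crypto-"
    prefixed module aliases, and the API requests modules with that prefix, so
    only modules that opt in can be auto-loaded through crypto names.

        #include <linux/crypto.h>
        #include <linux/module.h>

        /* Aliases now carry the "crypto-" prefix... */
        MODULE_ALIAS_CRYPTO("gcm");
        MODULE_ALIAS_CRYPTO("gcm_base");

        /* ...and lookups request modules as "crypto-<name>", roughly:
         *     request_module("crypto-%s", name);
         */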
* crypto: Resolve shadow warnings [Mark Rustad, 2014-08-01, 1 file, -15/+15]

    Change formal parameters to not clash with global names to eliminate many
    W=2 warnings.

    Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
    Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks [James Yonan, 2013-10-07, 1 file, -1/+1]

    When comparing MAC hashes, AEAD authentication tags, or other hash values
    in the context of authentication or integrity checking, it is important
    not to leak timing information to a potential attacker, i.e. when
    communication happens over a network.

    Bytewise memory comparisons (such as memcmp) are usually optimized so that
    they return a nonzero value as soon as a mismatch is found.  E.g, on
    x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch and up to
    ~850 cyc for a full match (cold).  This early-return behavior can leak
    timing information as a side channel, allowing an attacker to iteratively
    guess the correct result.

    This patch adds a new method crypto_memneq ("memory not equal to each
    other") to the crypto API that compares memory areas of the same length in
    roughly "constant time" (cache misses could change the timing, but since
    they don't reveal information about the content of the strings being
    compared, they are effectively benign).  Iow, best and worst case behaviour
    take the same amount of time to complete (in contrast to memcmp).

    Note that crypto_memneq (unlike memcmp) can only be used to test for
    equality or inequality, NOT for lexicographical order.  This, however, is
    not an issue for its use-cases within the crypto API.

    We tried to locate all of the places in the crypto API where memcmp was
    being used for authentication or integrity checking, and convert them over
    to crypto_memneq.

    crypto_memneq is declared noinline, placed in its own source file, and
    compiled with optimizations that might increase code size disabled ("Os")
    because a smart compiler (or LTO) might notice that the return value is
    always compared against zero/nonzero, and might then reintroduce the same
    early-return optimization that we are trying to avoid.  Using #pragma or
    __attribute__ optimization annotations of the code for disabling
    optimization was avoided as it seems to be considered broken or
    unmaintained for long time in GCC [1].  Therefore, we work around that by
    specifying the compile flag for memneq.o directly in the Makefile.  We
    found that this seems to be most appropriate.

    As we use ("Os"), this patch also provides a loop-free "fast-path" for
    frequently used 16 byte digests.  Similarly to kernel library string
    functions, leave an option for future even further optimized architecture
    specific assembler implementations.

    This was a joint work of James Yonan and Daniel Borkmann.  Also thanks for
    feedback from Florian Weimer on this and earlier proposals [2].

    [1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
    [2] https://lkml.org/lkml/2013/2/10/131

    Signed-off-by: James Yonan <james@openvpn.net>
    Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
    Cc: Florian Weimer <fw@deneb.enyo.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
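    A usage sketch (an editor's illustration): a tag comparison written with
    crypto_memneq(), which only reports equal/not-equal and takes roughly the
    same time in both cases, unlike an early-returning memcmp().

        #include <crypto/algapi.h>     /* declares crypto_memneq() */
        #include <linux/errno.h>
        #include <linux/types.h>

        static int verify_tag(const u8 *computed, const u8 *received,
                              unsigned int len)
        {
            /* Constant-time-ish comparison; nonzero means mismatch. */
            return crypto_memneq(computed, received, len) ? -EBADMSG : 0;
        }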
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 [Linus Torvalds, 2013-05-02, 1 file, -19/+97]

    Pull crypto update from Herbert Xu:

      - XTS mode optimisation for twofish/cast6/camellia/aes on x86
      - AVX2/x86_64 implementation for blowfish/twofish/serpent/camellia
      - SSSE3/AVX/AVX2 optimisations for sha256/sha512
      - Added driver for SAHARA2 crypto accelerator
      - Fix for GMAC when used in non-IPsec scenarios
      - Added generic CMAC implementation (including IPsec glue)
      - IP update for crypto/atmel
      - Support for more than one device in hwrng/timeriomem
      - Added Broadcom BCM2835 RNG driver
      - Misc fixes

    * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (59 commits)
      crypto: caam - fix job ring cleanup code
      crypto: camellia - add AVX2/AES-NI/x86_64 assembler implementation of camellia cipher
      crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher
      crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher
      crypto: blowfish - add AVX2/x86_64 implementation of blowfish cipher
      crypto: tcrypt - add async cipher speed tests for blowfish
      crypto: testmgr - extend camellia test-vectors for camellia-aesni/avx2
      crypto: aesni_intel - fix Kconfig problem with CRYPTO_GLUE_HELPER_X86
      crypto: aesni_intel - add more optimized XTS mode for x86-64
      crypto: x86/camellia-aesni-avx - add more optimized XTS code
      crypto: cast6-avx: use new optimized XTS code
      crypto: x86/twofish-avx - use optimized XTS code
      crypto: x86 - add more optimized XTS-mode for serpent-avx
      xfrm: add rfc4494 AES-CMAC-96 support
      crypto: add CMAC support to CryptoAPI
      crypto: testmgr - add empty test vectors for null ciphers
      crypto: testmgr - add AES GMAC test vectors
      crypto: gcm - fix rfc4543 to handle async crypto correctly
      crypto: gcm - make GMAC work when dst and src are different
      hwrng: timeriomem - added devicetree hooks
      ...
| * crypto: gcm - fix rfc4543 to handle async crypto correctly [Jussi Kivilinna, 2013-04-25, 1 file, -2/+17]

    If the gcm cipher used by rfc4543 does not complete request immediately,
    the authentication tag is not copied to destination buffer.  Patch adds
    correct async logic for this case.

    Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: gcm - make GMAC work when dst and src are different [Jussi Kivilinna, 2013-04-25, 1 file, -17/+80]

    The GMAC code assumes that dst==src, which causes problems when trying to
    add rfc4543(gcm(aes)) test vectors.  So fix this code to work when source
    and destination buffer are different.

    Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: gcm - fix assumption that assoc has one segment [Jussi Kivilinna, 2013-04-02, 1 file, -3/+14]

    rfc4543(gcm(*)) code for GMAC assumes that the assoc scatterlist always
    contains only one segment and only makes use of this first segment.
    However ipsec passes assoc with three segments when using 'extended
    sequence number', thus in this case rfc4543(gcm(*)) fails to function
    correctly.  Patch fixes this issue.

    Reported-by: Chaoxing Lin <Chaoxing.Lin@ultra-3eti.com>
    Tested-by: Chaoxing Lin <Chaoxing.Lin@ultra-3eti.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: use ERR_CAST [Julia Lawall, 2013-02-04, 1 file, -20/+9]

    Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

        // <smpl>
        @@
        expression err,x;
        @@
        - err = PTR_ERR(x);
          if (IS_ERR(x))
        - return ERR_PTR(err);
        + return ERR_CAST(x);
        // </smpl>

    Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
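    A before/after sketch of the transformation in plain C (the wrapper type
    and function names are hypothetical, not taken from gcm.c):

        #include <linux/crypto.h>
        #include <linux/err.h>
        #include <linux/slab.h>

        struct wrapped_alg {                   /* hypothetical wrapper type */
            struct crypto_alg *alg;
        };

        static struct wrapped_alg *wrap_alg(struct crypto_alg *alg)
        {
            struct wrapped_alg *w;

            /*
             * Before:  err = PTR_ERR(alg);
             *          if (IS_ERR(alg))
             *                  return ERR_PTR(err);
             * After: ERR_CAST() propagates the error across pointer types.
             */
            if (IS_ERR(alg))
                return ERR_CAST(alg);

            w = kzalloc(sizeof(*w), GFP_KERNEL);
            if (!w)
                return ERR_PTR(-ENOMEM);

            w->alg = alg;
            return w;
        }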
* crypto: Use scatterwalk_crypto_chain [Steffen Klassert, 2010-12-02, 1 file, -17/+2]

    Use scatterwalk_crypto_chain in favor of locally defined chaining
    functions.

    Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Add RFC4543 wrapper for GCM [Tobias Brunner, 2010-01-17, 1 file, -0/+287]

    This patch adds the RFC4543 (GMAC) wrapper for GCM similar to the existing
    RFC4106 wrapper.  The main differences between GCM and GMAC are the
    contents of the AAD and that the plaintext is empty for the latter.

    Signed-off-by: Tobias Brunner <tobias@strongswan.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - fix another complete call in complete function [Huang Ying, 2009-11-16, 1 file, -34/+73]

    The flow of the complete function (xxx_done) in gcm.c is as follows:

        void complete(struct crypto_async_request *areq, int err)
        {
            struct aead_request *req = areq->data;

            if (!err) {
                err = async_next_step();
                if (err == -EINPROGRESS || err == -EBUSY)
                    return;
            }

            complete_for_next_step(areq, err);
        }

    But *areq may be destroyed in async_next_step(), so complete_for_next_step()
    cannot work properly.  To fix this, one of the following methods is used
    for each complete function:

    - Add a __complete() for each complete(), which accepts struct aead_request
      *req instead of areq, to avoid using areq after it is destroyed.

    - Expand complete_for_next_step().

    The fixing method is based on the idea of Herbert Xu.

    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: gcm - Use GHASH digest algorithm [Huang Ying, 2009-08-06, 1 file, -173/+407]

    Remove the dedicated GHASH implementation in GCM and use the GHASH digest
    algorithm instead.  This makes GCM use a hardware-accelerated GHASH
    implementation automatically if one is available.

    The ahash rather than shash interface is used, because some
    hardware-accelerated GHASH implementations need an asynchronous interface.

    Signed-off-by: Huang Ying <ying.huang@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] gcm: Introduce rfc4106 [Herbert Xu, 2008-01-11, 1 file, -0/+239]

    This patch introduces the rfc4106 wrapper for GCM just as we have an
    rfc4309 wrapper for CCM.  The purpose of the wrapper is to include part of
    the IV in the key so that it can be negotiated by IPsec.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
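    A key-layout sketch (illustrative, using today's AEAD API rather than the
    interfaces of the time): rfc4106 appends the 4-byte IPsec salt to the key
    material, so a 16-byte AES key is installed as 20 bytes.

        #include <crypto/aead.h>
        #include <linux/err.h>
        #include <linux/types.h>

        static struct crypto_aead *alloc_rfc4106_sketch(const u8 *key_and_salt)
        {
            struct crypto_aead *tfm = crypto_alloc_aead("rfc4106(gcm(aes))", 0, 0);
            int err;

            if (IS_ERR(tfm))
                return tfm;

            /* 16-byte AES key followed by the 4-byte salt. */
            err = crypto_aead_setkey(tfm, key_and_salt, 16 + 4);
            if (err) {
                crypto_free_aead(tfm);
                return ERR_PTR(err);
            }

            return tfm;
        }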
* [CRYPTO] gcm: Use crypto_grab_skcipher [Herbert Xu, 2008-01-11, 1 file, -26/+23]

    This patch converts the gcm algorithm over to crypto_grab_skcipher which is
    a prerequisite for IV generation.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] gcm: Allow block cipher parameter [Herbert Xu, 2008-01-11, 1 file, -24/+89]

    This patch adds the gcm_base template which takes a block cipher parameter
    instead of cipher.  This allows the user to specify a specific CTR
    implementation.

    This also fixes a leak of the cipher algorithm that was previously looked
    up but never freed.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] gcm: Add support for async ciphers [Herbert Xu, 2008-01-11, 1 file, -78/+112]

    This patch adds the necessary changes for GCM to be used with async
    ciphers.  This would allow it to be used with hardware devices that support
    CTR.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] ctr: Refactor into ctr and rfc3686 [Herbert Xu, 2008-01-11, 1 file, -4/+3]

    As discussed previously, this patch moves the basic CTR functionality into
    a chainable algorithm called ctr.  The IPsec-specific variant of it is now
    placed on top with the name rfc3686.

    So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
    variant will be called rfc3686(ctr(aes)).

    This patch also adjusts gcm accordingly.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
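    A naming sketch (illustrative, shown with the modern skcipher API rather
    than the blkcipher API of the time): the chainable mode and its IPsec
    variant are requested under the two names described above.

        #include <crypto/skcipher.h>
        #include <linux/err.h>

        static int ctr_naming_sketch(void)
        {
            struct crypto_skcipher *ctr, *ipsec_ctr;

            ctr = crypto_alloc_skcipher("ctr(aes)", 0, 0);  /* ivsize 16 */
            if (IS_ERR(ctr))
                return PTR_ERR(ctr);

            /* IPsec variant: nonce is carried in the key material. */
            ipsec_ctr = crypto_alloc_skcipher("rfc3686(ctr(aes))", 0, 0);
            if (IS_ERR(ipsec_ctr)) {
                crypto_free_skcipher(ctr);
                return PTR_ERR(ipsec_ctr);
            }

            crypto_free_skcipher(ipsec_ctr);
            crypto_free_skcipher(ctr);
            return 0;
        }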
* [CRYPTO] gcm: Fix request context alignment [Herbert Xu, 2008-01-11, 1 file, -7/+14]

    This patch fixes the request context alignment so that it is actually
    aligned to the value required by the algorithm.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] gcm: Put abreq in private context instead of on stack [Herbert Xu, 2008-01-11, 1 file, -7/+11]

    The abreq structure is currently allocated on the stack.  This is broken if
    the underlying algorithm is asynchronous.  This patch changes it so that
    it's taken from the private context instead, which has been enlarged
    accordingly.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] scatterwalk: Restore custom sg chaining for now [Herbert Xu, 2008-01-11, 1 file, -1/+1]

    Unfortunately the generic chaining hasn't been ported to all architectures
    yet, and notably not s390.  So this patch restores the chaining that we've
    been using previously, which does work everywhere.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] scatterwalk: Move scatterwalk.h to linux/crypto [Herbert Xu, 2008-01-11, 1 file, -1/+2]

    The scatterwalk infrastructure is used by algorithms so it needs to move
    out of crypto for future users that may live in drivers/crypto or
    asm/*/crypto.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] aead: Return EBADMSG for ICV mismatch [Herbert Xu, 2008-01-11, 1 file, -1/+1]

    This patch changes gcm/authenc to return EBADMSG instead of EINVAL for ICV
    mismatches.  This convention has already been adopted by IPsec.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] gcm: Fix ICV handling [Herbert Xu, 2008-01-11, 1 file, -28/+40]

    The crypto_aead convention for ICVs is to include it directly in the
    output.  If we decided to change this in future then we would make the ICV
    (if the algorithm has an explicit one) available in the request itself.
    For now no algorithm needs this so this patch changes gcm to conform to
    this convention.

    It also adjusts the tcrypt aead tests to take this into account.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] aead: Make authsize a run-time parameter [Herbert Xu, 2008-01-11, 1 file, -1/+1]

    As it is, authsize is an algorithm parameter which cannot be changed at
    run-time.  This is inconvenient because hardware that implements such
    algorithms would have to register each authsize that they support
    separately.

    Since authsize is a property common to all AEAD algorithms, we can add a
    function setauthsize that sets it at run-time, just like setkey.

    This patch does exactly that and also changes authenc so that authsize is
    no longer a parameter of its template.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
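    A sketch of the run-time interface this adds (shown with the modern AEAD
    API for illustration; the tag length of 8 is just an example size GCM
    happens to permit): authsize is now configured per-tfm, like the key.

        #include <crypto/aead.h>
        #include <linux/types.h>

        static int setup_gcm_tag_size(struct crypto_aead *tfm,
                                      const u8 *key, unsigned int keylen)
        {
            int err = crypto_aead_setkey(tfm, key, keylen);

            if (err)
                return err;

            /* Like setkey, the ICV/tag length is now set at run time. */
            return crypto_aead_setauthsize(tfm, 8);
        }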
* [CRYPTO] gcm: New algorithm [Mikko Herranen, 2008-01-11, 1 file, -0/+465]

    Add GCM/GMAC support to cryptoapi.  GCM (Galois/Counter Mode) is an AEAD
    mode of operation for any block cipher with a block size of 16.  The
    typical example is AES-GCM.

    Signed-off-by: Mikko Herranen <mh1@iki.fi>
    Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
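    A minimal end-to-end usage sketch of the algorithm added here (an editor's
    illustration with today's AEAD API, which postdates this commit): AES-GCM
    with a 12-byte IV, no AAD, and a 16-byte tag appended to the ciphertext,
    so the destination scatterlist must cover ptext_len + 16 bytes.

        #include <crypto/aead.h>
        #include <linux/crypto.h>
        #include <linux/err.h>
        #include <linux/scatterlist.h>
        #include <linux/slab.h>

        static int gcm_encrypt_sketch(const u8 *key, unsigned int keylen,
                                      u8 *iv, struct scatterlist *sg,
                                      unsigned int ptext_len)
        {
            struct crypto_aead *tfm;
            struct aead_request *req;
            DECLARE_CRYPTO_WAIT(wait);
            int err;

            tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
            if (IS_ERR(tfm))
                return PTR_ERR(tfm);

            err = crypto_aead_setkey(tfm, key, keylen);
            if (!err)
                err = crypto_aead_setauthsize(tfm, 16);
            if (err)
                goto out_free_tfm;

            req = aead_request_alloc(tfm, GFP_KERNEL);
            if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
            }

            aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
                                      crypto_req_done, &wait);
            aead_request_set_ad(req, 0);                    /* no AAD */
            aead_request_set_crypt(req, sg, sg, ptext_len, iv);

            /* Sleeps until the (possibly async) operation completes. */
            err = crypto_wait_req(crypto_aead_encrypt(req), &wait);

            aead_request_free(req);
        out_free_tfm:
            crypto_free_aead(tfm);
            return err;
        }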