path: root/drivers/crypto/vmx
Commit message | Author | Age | Files | Lines
* crypto: vmx - ghash: do nosimd fallback manually (Daniel Axtens, 2019-06-11, 1 file, -129/+89)
    commit 357d065a44cdd77ed5ff35155a989f2a763e96ef upstream.
    VMX ghash was using a fallback that did not support interleaving simd and nosimd operations, leading to failures in the extended test suite.
    If I understood correctly, Eric's suggestion was to use the same data format that the generic code uses, allowing us to call into it with the same contexts. I wasn't able to get that to work - I think there's a very different key structure and data layout being used.
    So instead steal the arm64 approach and perform the fallback operations directly if required.
    Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
    Cc: stable@vger.kernel.org # v4.1+
    Reported-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmx - CTR: always increment IV as quadword (Daniel Axtens, 2019-06-11, 1 file, -1/+1)
    commit 009b30ac7444c17fae34c4f435ebce8e8e2b3250 upstream.
    The kernel self-tests picked up an issue with CTR mode:
        alg: skcipher: p8_aes_ctr encryption test failed (wrong result) on test vector 3, cfg="uneven misaligned splits, may sleep"
    Test vector 3 has an IV of FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD, so after 3 increments it should wrap around to 0.
    In the aesp8-ppc code from OpenSSL, there are two paths that increment IVs: the bulk (8 at a time) path, and the individual path which is used when there are fewer than 8 AES blocks to process.
    In the bulk path, the IV is incremented with vadduqm: "Vector Add Unsigned Quadword Modulo", which does 128-bit addition.
    In the individual path, however, the IV is incremented with vadduwm: "Vector Add Unsigned Word Modulo", which instead does 4 32-bit additions. Thus the IV would instead become FFFFFFFFFFFFFFFFFFFFFFFF00000000, throwing off the result.
    Use vadduqm. This was probably a typo originally, what with q and w being adjacent. It is a pretty narrow edge case: I am really impressed by the quality of the kernel self-tests!
    Fixes: 5c380d623ed3 ("crypto: vmx - Add support for VMS instructions by ASM")
    Cc: stable@vger.kernel.org
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Acked-by: Nayna Jain <nayna@linux.ibm.com>
    Tested-by: Nayna Jain <nayna@linux.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
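    The effect of the two instructions is easy to reproduce outside the kernel. Below is a standalone C sketch (illustrative only, not driver code) that applies both increment strategies to the IV from the failing test vector; quadword addition wraps the whole 128-bit value to zero, while word-wise addition leaves the upper words stuck at FF:

        /*
         * Demo (not driver code): why vadduwm-style word-wise addition
         * corrupts a CTR IV that must wrap as one 128-bit quantity.
         * Build with: gcc -std=c99 -o ctr_iv_demo ctr_iv_demo.c
         */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* 128-bit increment with carry across all 16 bytes (vadduqm behaviour). */
        static void inc_quadword(uint8_t iv[16])
        {
                for (int i = 15; i >= 0; i--)
                        if (++iv[i] != 0)
                                break;
        }

        /* Add 1 to the low 32-bit word only, no carry-out (vadduwm behaviour). */
        static void inc_words(uint8_t iv[16])
        {
                uint32_t w = (uint32_t)iv[12] << 24 | (uint32_t)iv[13] << 16 |
                             (uint32_t)iv[14] << 8  | (uint32_t)iv[15];
                w += 1;         /* wraps modulo 2^32, never touches iv[0..11] */
                iv[12] = w >> 24; iv[13] = w >> 16; iv[14] = w >> 8; iv[15] = (uint8_t)w;
        }

        static void show(const char *tag, const uint8_t iv[16])
        {
                printf("%-20s", tag);
                for (int i = 0; i < 16; i++)
                        printf("%02X", iv[i]);
                printf("\n");
        }

        int main(void)
        {
                uint8_t a[16], b[16];

                memset(a, 0xFF, 16);
                a[15] = 0xFD;                   /* IV from test vector 3 */
                memcpy(b, a, 16);

                for (int i = 0; i < 3; i++) {   /* three increments */
                        inc_quadword(a);
                        inc_words(b);
                }
                show("quadword (correct):", a); /* 00000000000000000000000000000000 */
                show("word-wise (broken):", b); /* FFFFFFFFFFFFFFFFFFFFFFFF00000000 */
                return 0;
        }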
* crypto: vmx - fix copy-paste error in CTR mode (Daniel Axtens, 2019-06-11, 1 file, -2/+2)
    commit dcf7b48212c0fab7df69e84fab22d6cb7c8c0fb9 upstream.
    The original assembly imported from OpenSSL has two copy-paste errors in handling CTR mode. When dealing with a 2 or 3 block tail, the code branches to the CBC decryption exit path, rather than to the CTR exit path.
    This leads to corruption of the IV, which leads to subsequent blocks being corrupted.
    This can be detected with the libkcapi test suite, which is available at https://github.com/smuellerDD/libkcapi
    Reported-by: Ondrej Mosnáček <omosnacek@gmail.com>
    Fixes: 5c380d623ed3 ("crypto: vmx - Add support for VMS instructions by ASM")
    Cc: stable@vger.kernel.org
    Signed-off-by: Daniel Axtens <dja@axtens.net>
    Tested-by: Michael Ellerman <mpe@ellerman.id.au>
    Tested-by: Ondrej Mosnacek <omosnacek@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmx - Fix sleep-in-atomic bugs (Ondrej Mosnacek, 2018-09-19, 1 file, -16/+14)
    commit 0522236d4f9c5ab2e79889cb020d1acbe5da416e upstream.
    This patch fixes sleep-in-atomic bugs in AES-CBC and AES-XTS VMX implementations. The problem is that the blkcipher_* functions should not be called in atomic context.
    The bugs can be reproduced via the AF_ALG interface by trying to encrypt/decrypt sufficiently large buffers (at least 64 KiB) using the VMX implementations of 'cbc(aes)' or 'xts(aes)'. Such operations then trigger BUG in crypto_yield():
        [  891.863680] BUG: sleeping function called from invalid context at include/crypto/algapi.h:424
        [  891.864622] in_atomic(): 1, irqs_disabled(): 0, pid: 12347, name: kcapi-enc
        [  891.864739] 1 lock held by kcapi-enc/12347:
        [  891.864811]  #0: 00000000f5d42c46 (sk_lock-AF_ALG){+.+.}, at: skcipher_recvmsg+0x50/0x530
        [  891.865076] CPU: 5 PID: 12347 Comm: kcapi-enc Not tainted 4.19.0-0.rc0.git3.1.fc30.ppc64le #1
        [  891.865251] Call Trace:
        [  891.865340] [c0000003387578c0] [c000000000d67ea4] dump_stack+0xe8/0x164 (unreliable)
        [  891.865511] [c000000338757910] [c000000000172a58] ___might_sleep+0x2f8/0x310
        [  891.865679] [c000000338757990] [c0000000006bff74] blkcipher_walk_done+0x374/0x4a0
        [  891.865825] [c0000003387579e0] [d000000007e73e70] p8_aes_cbc_encrypt+0x1c8/0x260 [vmx_crypto]
        [  891.865993] [c000000338757ad0] [c0000000006c0ee0] skcipher_encrypt_blkcipher+0x60/0x80
        [  891.866128] [c000000338757b10] [c0000000006ec504] skcipher_recvmsg+0x424/0x530
        [  891.866283] [c000000338757bd0] [c000000000b00654] sock_recvmsg+0x74/0xa0
        [  891.866403] [c000000338757c10] [c000000000b00f64] ___sys_recvmsg+0xf4/0x2f0
        [  891.866515] [c000000338757d90] [c000000000b02bb8] __sys_recvmsg+0x68/0xe0
        [  891.866631] [c000000338757e30] [c00000000000bbe4] system_call+0x5c/0x70
    Fixes: 8c755ace357c ("crypto: vmx - Adding CBC routines for VMX module")
    Fixes: c07f5d3da643 ("crypto: vmx - Adding support for XTS")
    Cc: stable@vger.kernel.org
    Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmx - Remove overly verbose printk from AES init routines (Michael Ellerman, 2018-06-16, 4 files, -8/+0)
    commit 1411b5218adbcf1d45ddb260db5553c52e8d917c upstream.
    In the vmx AES init routines we do a printk(KERN_INFO ...) to report the fallback implementation we're using. However with a slow console this can significantly affect the speed of crypto operations. Using 'cryptsetup benchmark' the removal of the printk() leads to a ~5x speedup for aes-cbc decryption.
    So remove them.
    Fixes: 8676590a1593 ("crypto: vmx - Adding AES routines for VMX module")
    Fixes: 8c755ace357c ("crypto: vmx - Adding CBC routines for VMX module")
    Fixes: 4f7f60d312b3 ("crypto: vmx - Adding CTR routines for VMX module")
    Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
    Cc: stable@vger.kernel.org # v4.1+
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmx - disable preemption to enable vsx in aes_ctr.c (Li Zhong, 2017-11-15, 1 file, -0/+6)
    [ Upstream commit 7dede913fc2ab9c0d3bff3a49e26fa9e858b0c13 ]
    Some preemptible-check warnings were reported from enable_kernel_vsx(). This patch disables preemption in aes_ctr.c before enabling VSX, making it consistent with the other files in the same directory.
    Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
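    For reference, a minimal sketch of the bracketing this patch makes aes_ctr.c follow. This is an approximation of the pattern used by the other files in the directory, assuming the usual enable_kernel_vsx()/disable_kernel_vsx() pairing; it is not the literal source:

        preempt_disable();        /* the piece this patch adds for aes_ctr.c */
        pagefault_disable();
        enable_kernel_vsx();      /* make the VSX unit usable in kernel context */
        /* ... VSX-accelerated AES work (the aes_p8_* asm routines) ... */
        disable_kernel_vsx();
        pagefault_enable();
        preempt_enable();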
* crypto: vmx - Fix memory corruption caused by p8_ghash (Marcelo Cerri, 2016-10-22, 1 file, -15/+16)
    commit 80da44c29d997e28c4442825f35f4ac339813877 upstream.
    This patch changes the p8_ghash driver to use ghash-generic as a fixed fallback implementation. This allows the correct value of descsize to be defined directly in its shash_alg structure and avoids problems with incorrect buffer sizes when its state is exported or imported.
    Reported-by: Jan Stancek <jstancek@redhat.com>
    Fixes: cc333cd68dfa ("crypto: vmx - Adding GHASH routines for VMX module")
    Signed-off-by: Marcelo Cerri <marcelo.cerri@canonical.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
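    Roughly, "fixed fallback" means the init path asks for ghash-generic by name instead of letting the crypto API pick an arbitrary fallback of unknown state size; a sketch under that assumption (names and flags follow the shash API as generally used, not the exact p8_ghash source):

        struct crypto_shash *fallback;

        /* Request the generic GHASH implementation as the one and only
         * fallback, so the p8_ghash descriptor size is known up front. */
        fallback = crypto_alloc_shash("ghash-generic", 0, CRYPTO_ALG_NEED_FALLBACK);
        if (IS_ERR(fallback))
                return PTR_ERR(fallback);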
* crypto: vmx - IV size failing on skcipher API (Leonidas Da Silva Barbosa, 2016-09-15, 2 files, -2/+2)
    [ Upstream commit 0d3d054b43719ef33232677ba27ba6097afdafbc ]
    The IV size was zero for the CBC and CTR modes, causing a bug triggered by the skcipher API. Fix this by advertising the correct size.
    Signed-off-by: Leonidas Da Silva Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Paulo Smorigo <pfsmorigo@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
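    Illustratively, the fix boils down to advertising a non-zero IV length in the algorithm descriptors, something like the fragment below (field names follow the old blkcipher interface and are an approximation, not the exact hunk):

        .cra_blkcipher = {
                .ivsize      = AES_BLOCK_SIZE,  /* was 0; cbc(aes)/ctr(aes) need a 16-byte IV */
                .min_keysize = AES_MIN_KEY_SIZE,
                .max_keysize = AES_MAX_KEY_SIZE,
        },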
* crypto: vmx - Fix ABI detection (Anton Blanchard, 2016-09-15, 1 file, -1/+1)
    [ Upstream commit 975f57fdff1d0eb9816806cabd27162a8a1a4038 ]
    When calling ppc-xlate.pl, we pass it either linux-ppc64 or linux-ppc64le. The script however was expecting linux64le, a result of its OpenSSL origins. This means we aren't obeying the ppc64le ABIv2 rules.
    Fix this by checking for linux-ppc64le.
    Fixes: 5ca55738201c ("crypto: vmx - comply with ABIs that specify vrsave as reserved.")
    Cc: stable@vger.kernel.org
    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmx - comply with ABIs that specify vrsave as reserved (Paulo Flabiano Smorigo, 2016-09-15, 1 file, -0/+20)
    [ Upstream commit 5ca55738201c7ae1b556ad87bbb22c139ecc01dd ]
    This gives significant improvements (~+15%) in some modes.
    The code has been adopted from the OpenSSL project in collaboration with the original author (Andy Polyakov <appro@openssl.org>).
    Signed-off-by: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmx - Increase priority of aes-cbc cipher (Anton Blanchard, 2016-07-11, 2 files, -2/+2)
    commit 12d3f49e1ffbbf8cbbb60acae5a21103c5c841ac upstream.
    All of the VMX AES ciphers (AES, AES-CBC and AES-CTR) are set at priority 1000. Unfortunately this means we never use AES-CBC and AES-CTR, because the base AES-CBC cipher that is implemented on top of AES inherits its priority.
    To fix this, AES-CBC and AES-CTR have to be a higher priority. Set them to 2000.
    Testing on a POWER8 with:
        cryptsetup benchmark --cipher aes --key-size 256
    Shows decryption speed increase from 402.4 MB/s to 3069.2 MB/s, over 7x faster. Thanks to Mike Strosaker for helping me debug this issue.
    Fixes: 8c755ace357c ("crypto: vmx - Adding CBC routines for VMX module")
    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
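    In crypto API terms this is a one-field change: the API selects the registered implementation with the highest cra_priority for a given cra_name. A sketch of the relevant fields (approximate, not the literal diff):

        static struct crypto_alg p8_aes_cbc_alg = {
                .cra_name        = "cbc(aes)",
                .cra_driver_name = "p8_aes_cbc",
                .cra_priority    = 2000,  /* was 1000, the same as the base p8_aes cipher */
                /* remaining fields unchanged */
        };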
* crypto: vmx - Fixing opcode issue (Leonidas Da Silva Barbosa, 2015-08-24, 1 file, -0/+1)
    At build time the vadduqm opcode was not being mapped correctly. Add a new mapping in ppc-xlate to handle it.
    Signed-off-by: Leonidas S Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Fixing GHASH Key issue on little endian (Leonidas Da Silva Barbosa, 2015-08-18, 1 file, -0/+6)
    The GHASH table algorithm uses a big-endian key. On little-endian machines the key is LE-ordered in memory, and the lxvd2x instruction loads it as-is (LE or BE order); in the LE case this generates a wrong table, resulting in wrong hashes from the algorithm.
    The bug affects only LE machines. In order to fix it, we byte-swap the loaded key.
    Cc: stable@vger.kernel.org
    Signed-off-by: Leonidas S Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
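    The underlying issue is plain byte-order handling: a GHASH key is defined as a big-endian 128-bit value, so loading its bytes verbatim on a little-endian machine yields byte-reversed doublewords. A standalone C sketch (illustrative only, not the VMX assembly):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                /* H = 0x00112233445566778899AABBCCDDEEFF, stored big-endian. */
                const uint8_t key[16] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
                                          0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF };
                uint64_t hi = 0, lo = 0;

                /* Naive load: native interpretation of the bytes (wrong on LE hosts). */
                memcpy(&hi, key, 8);
                memcpy(&lo, key + 8, 8);
                printf("naive load:   %016llx %016llx\n",
                       (unsigned long long)hi, (unsigned long long)lo);

                /* Correct load: assemble each doubleword most-significant byte first. */
                hi = lo = 0;
                for (int i = 0; i < 8; i++) {
                        hi = hi << 8 | key[i];
                        lo = lo << 8 | key[8 + i];
                }
                printf("byte-swapped: %016llx %016llx\n",
                       (unsigned long long)hi, (unsigned long long)lo);
                return 0;
        }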
* crypto: vmx - Fixing AES-CTR counter bug (Leonidas Da Silva Barbosa, 2015-08-18, 2 files, -18/+24)
    AES-CTR was using an 8-byte/8-byte split counter, which does not match what the kernel expects. In the previous code the counter was incremented with vadduwm. Replace this with vadduqm, which handles both cases: the 8/8-byte split counter and a full 16-byte counter.
    Cc: stable@vger.kernel.org
    Signed-off-by: Leonidas S Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Adding enable_kernel_vsx() to access VSX instructions (Leonidas Da Silva Barbosa, 2015-07-14, 4 files, -0/+13)
    The vmx-crypto driver makes use of some VSX instructions, which are only available if VSX is enabled. When VSX is not enabled, vmx-crypto fails with a VSX exception.
    In order to fix this, enable_kernel_vsx() was added to turn on VSX instructions for vmx-crypto.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2015-06-22, 8 files, -508/+532)
    Pull crypto update from Herbert Xu:
     "Here is the crypto update for 4.2:
      API:
       - Convert RNG interface to new style.
       - New AEAD interface with one SG list for AD and plain/cipher text. All external AEAD users have been converted.
       - New asymmetric key interface (akcipher).
      Algorithms:
       - Chacha20, Poly1305 and RFC7539 support.
       - New RSA implementation.
       - Jitter RNG.
       - DRBG is now seeded with both /dev/random and Jitter RNG. If kernel pool isn't ready then DRBG will be reseeded when it is.
       - DRBG is now the default crypto API RNG, replacing krng.
       - 842 compression (previously part of powerpc nx driver).
      Drivers:
       - Accelerated SHA-512 for arm64.
       - New Marvell CESA driver that supports DMA and more algorithms.
       - Updated powerpc nx 842 support.
       - Added support for SEC1 hardware to talitos"
    * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (292 commits)
      crypto: marvell/cesa - remove COMPILE_TEST dependency
      crypto: algif_aead - Temporarily disable all AEAD algorithms
      crypto: af_alg - Forbid the use internal algorithms
      crypto: echainiv - Only hold RNG during initialisation
      crypto: seqiv - Add compatibility support without RNG
      crypto: eseqiv - Offer normal cipher functionality without RNG
      crypto: chainiv - Offer normal cipher functionality without RNG
      crypto: user - Add CRYPTO_MSG_DELRNG
      crypto: user - Move cryptouser.h to uapi
      crypto: rng - Do not free default RNG when it becomes unused
      crypto: skcipher - Allow givencrypt to be NULL
      crypto: sahara - propagate the error on clk_disable_unprepare() failure
      crypto: rsa - fix invalid select for AKCIPHER
      crypto: picoxcell - Update to the current clk API
      crypto: nx - Check for bogus firmware properties
      crypto: marvell/cesa - add DT bindings documentation
      crypto: marvell/cesa - add support for Kirkwood and Dove SoCs
      crypto: marvell/cesa - add support for Orion SoCs
      crypto: marvell/cesa - add allhwsupport module parameter
      crypto: marvell/cesa - add support for all armada SoCs
      ...
| * crypto: vmx - Reindent to kernel style (Herbert Xu, 2015-06-16, 6 files, -482/+506)
    This patch reindents the vmx code base to the kernel coding style.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: vmx - Remove duplicate PPC64 dependency (Herbert Xu, 2015-06-16, 1 file, -1/+1)
    The top-level CRYPTO_DEV_VMX option already depends on PPC64 so there is no need to depend on it again at CRYPTO_DEV_VMX_ENCRYPT.
    This patch also removes a redundant "default n".
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: vmx - fix two mistyped texts (Paulo Flabiano Smorigo, 2015-05-15, 2 files, -2/+2)
    One mistyped description and another mistyped target were corrected.
    Signed-off-by: Paulo Flabiano Smorigo <pfsmorigo@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | sched/preempt, powerpc: Disable preemption in enable_kernel_altivec() explicitly (David Hildenbrand, 2015-05-19, 3 files, -1/+21)
    enable_kernel_altivec() has to be called with disabled preemption. Let's make this explicit, to prepare for pagefault_disable() not touching preemption anymore.
    Reviewed-and-tested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: David.Laight@ACULAB.COM
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: airlied@linux.ie
    Cc: akpm@linux-foundation.org
    Cc: bigeasy@linutronix.de
    Cc: borntraeger@de.ibm.com
    Cc: daniel.vetter@intel.com
    Cc: heiko.carstens@de.ibm.com
    Cc: herbert@gondor.apana.org.au
    Cc: hocko@suse.cz
    Cc: hughd@google.com
    Cc: mst@redhat.com
    Cc: paulus@samba.org
    Cc: ralf@linux-mips.org
    Cc: schwidefsky@de.ibm.com
    Cc: yang.shi@windriver.com
    Link: http://lkml.kernel.org/r/1431359540-32227-14-git-send-email-dahi@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* linux-next: Tree for Mar 11 (powerpc build failure due to vmx crypto code) (Herbert Xu, 2015-03-12, 3 files, -40/+5)
    crypto: vmx - Fix assembler perl to use _GLOBAL
    Rather than doing things by hand for global symbols to deal with different calling conventions, we already have a macro _GLOBAL in Linux to handle this.
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    Tested-by: Guenter Roeck <linux@roeck-us.net>
* crypto: vmx - Enabling VMX module for PPC64 (Leonidas S. Barbosa, 2015-02-28, 2 files, -0/+27)
    This patch enables the VMX module in PPC64.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Add support for VMS instructions by ASM (Leonidas S. Barbosa, 2015-02-28, 3 files, -0/+2400)
    OpenSSL implements optimized ASM algorithms that use VMX instructions on the POWER8 CPU. These scripts generate an endian-agnostic ASM implementation in order to support both big and little endian:
    - aesp8-ppc.pl: implements support for the AES instructions provided by the POWER8 processor.
    - ghashp8-ppc.pl: implements support for GHASH on POWER8.
    - ppc-xlate.pl: PPC assembler distiller.
    The code has been adopted from the OpenSSL project in collaboration with the original author (Andy Polyakov <appro@openssl.org>).
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Adding GHASH routines for VMX module (Marcelo H. Cerri, 2015-02-28, 1 file, -0/+214)
    This patch adds GHASH routines to the VMX module in order to make use of the VMX cryptographic acceleration instructions on the POWER8 CPU.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Adding CTR routines for VMX module (Marcelo H. Cerri, 2015-02-28, 1 file, -0/+167)
    This patch adds AES CTR routines to the VMX module in order to make use of the VMX cryptographic acceleration instructions on the POWER8 CPU.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Adding CBC routines for VMX module (Marcelo H. Cerri, 2015-02-28, 1 file, -0/+184)
    This patch adds AES CBC routines to the VMX module in order to make use of the VMX cryptographic acceleration instructions on the POWER8 CPU.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Adding AES routines for VMX module (Marcelo H. Cerri, 2015-02-28, 2 files, -0/+159)
    This patch adds AES routines to the VMX module in order to make use of the VMX cryptographic acceleration instructions on the POWER8 CPU.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmx - Adding VMX module for Power 8 (Marcelo H. Cerri, 2015-02-28, 1 file, -0/+88)
    This patch adds routines supporting VMX instructions on the POWER8.
    Signed-off-by: Leonidas S. Barbosa <leosilva@linux.vnet.ibm.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>