* crypto: ccm - switch to separate cbcmac driver (Ard Biesheuvel, 2017-02-11; 2 files, -137/+245)
  Update the generic CCM driver to defer CBC-MAC processing to a dedicated CBC-MAC ahash
  transform rather than open coding this transform (and much of the associated scatterwalk
  plumbing) in the CCM driver itself. This cleans up the code considerably, but more
  importantly, it allows the use of alternative CBC-MAC implementations that don't suffer
  from performance degradation due to significant setup time (e.g., the NEON based AES code
  needs to enable/disable the NEON, and load the S-box into 16 SIMD registers, which cannot
  be amortized over the entire input when using the cipher interface).
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
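  For reference, this is roughly how a caller drives a cbcmac(aes) ahash through the kernel
  hash API. It is a minimal sketch, not code from the patch: error paths are trimmed and
  asynchronous completion is glossed over; a bare CBC-MAC is only secure inside a
  construction such as CCM.

      #include <crypto/hash.h>
      #include <linux/err.h>
      #include <linux/scatterlist.h>
      #include <linux/slab.h>

      /* Compute cbcmac(aes) over a buffer; illustrative only. */
      static int cbcmac_digest(const u8 *key, unsigned int keylen,
                               const void *data, unsigned int len, u8 *mac)
      {
          struct crypto_ahash *tfm;
          struct ahash_request *req;
          struct scatterlist sg;
          int err;

          tfm = crypto_alloc_ahash("cbcmac(aes)", 0, 0);
          if (IS_ERR(tfm))
              return PTR_ERR(tfm);

          err = crypto_ahash_setkey(tfm, key, keylen);
          if (err)
              goto out_free_tfm;

          req = ahash_request_alloc(tfm, GFP_KERNEL);
          if (!req) {
              err = -ENOMEM;
              goto out_free_tfm;
          }

          sg_init_one(&sg, data, len);
          ahash_request_set_callback(req, 0, NULL, NULL);
          ahash_request_set_crypt(req, &sg, mac, len);

          err = crypto_ahash_digest(req);   /* may return -EINPROGRESS if async */

          ahash_request_free(req);
      out_free_tfm:
          crypto_free_ahash(tfm);
          return err;
      }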
* crypto: testmgr - add test cases for cbcmac(aes) (Ard Biesheuvel, 2017-02-11; 2 files, -0/+67)
  In preparation for splitting off the CBC-MAC transform in the CCM driver into a separate
  algorithm, define some test cases for the AES incarnation of cbcmac.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aes - add generic time invariant AES cipher (Ard Biesheuvel, 2017-02-11; 3 files, -0/+393)
  Lookup table based AES is sensitive to timing attacks, which is due to the fact that such
  table lookups are data dependent, and the fact that 8 KB worth of tables covers a
  significant number of cachelines on any architecture, resulting in an exploitable
  correlation between the key and the processing time for known plaintexts. For network
  facing algorithms such as CTR, CCM or GCM, this presents a security risk, which is why
  arch specific AES ports are typically time invariant, either through the use of special
  instructions, or by using SIMD algorithms that don't rely on table lookups. For generic
  code, this is difficult to achieve without losing too much performance, but we can improve
  the situation significantly by switching to an implementation that only needs 256 bytes of
  table data (the actual S-box itself), which can be prefetched at the start of each block to
  eliminate data dependent latencies. This code encrypts at ~25 cycles per byte on ARM
  Cortex-A57 (while the ordinary generic AES driver manages 18 cycles per byte on this
  hardware). Decryption is substantially slower.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
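  The prefetch idea can be illustrated in isolation: touch every cache line of the 256-byte
  S-box before processing a block so that the later data-dependent lookups hit the cache
  regardless of the input. A hedged sketch only, not the kernel's aes_ti.c code; a 64-byte
  cache line is assumed.

      /* Touch each cache line of a 256-byte table so subsequent data-dependent
       * lookups do not leak key material through cache-miss timing.
       */
      static void prefetch_sbox(const unsigned char *sbox)
      {
          volatile unsigned char sink = 0;
          unsigned int i;

          for (i = 0; i < 256; i += 64)
              sink ^= sbox[i];
          sink ^= sbox[255];
          (void)sink;
      }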
* crypto: aes-generic - drop alignment requirement (Ard Biesheuvel, 2017-02-11; 1 file, -32/+32)
  The generic AES code exposes a 32-bit align mask, which forces all users of the code to use
  temporary buffers or take other measures to ensure the alignment requirement is adhered to,
  even on architectures that don't care about alignment for software algorithms such as this
  one. So drop the align mask, and fix the code to use get_unaligned_le32() where
  appropriate, which will resolve to whatever is optimal for the architecture.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
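  A sketch of the converted access pattern (not the actual diff): load the 32-bit words of a
  block without assuming any particular alignment. get_unaligned_le32() compiles down to a
  plain load on architectures with cheap unaligned accesses and to byte loads elsewhere.

      #include <asm/unaligned.h>
      #include <linux/types.h>

      /* Load four little-endian 32-bit words from an arbitrarily aligned buffer. */
      static void load_block_le32(u32 w[4], const u8 *in)
      {
          w[0] = get_unaligned_le32(in);
          w[1] = get_unaligned_le32(in + 4);
          w[2] = get_unaligned_le32(in + 8);
          w[3] = get_unaligned_le32(in + 12);
      }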
* crypto: sha512-mb - Protect sha512 mb ctx mgr access (Tim Chen, 2017-02-11; 1 file, -22/+42)
  The flusher and regular multi-buffer computation via mcryptd may race with one another.
  Add a lock and turn off interrupts to access the multi-buffer computation state
  cstate->mgr before a round of computation. This should prevent the flusher code from
  jumping in.
  Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
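  The locking pattern described above, sketched with hypothetical names (the real structure
  and field names in the sha512-mb code differ): take the lock with interrupts disabled so
  the flusher cannot touch the manager state while it is being accessed.

      #include <linux/spinlock.h>

      /* Hypothetical stand-in for the per-CPU multi-buffer state. */
      struct mb_mgr_state {
          spinlock_t lock;
          void *mgr;              /* stand-in for the ctx manager */
      };

      static void *mb_begin_round(struct mb_mgr_state *cstate)
      {
          unsigned long flags;
          void *mgr;

          /* Interrupts off + lock held: the flusher cannot run here. */
          spin_lock_irqsave(&cstate->lock, flags);
          mgr = cstate->mgr;      /* exclusive access inside the critical section */
          spin_unlock_irqrestore(&cstate->lock, flags);

          return mgr;
      }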
* crypto: arm64/crc32 - merge CRC32 and PMULL instruction based drivers (Ard Biesheuvel, 2017-02-11; 5 files, -312/+41)
  The PMULL based CRC32 implementation already contains code based on the separate, optional
  CRC32 instructions to fall back to when operating on small quantities of data. We can
  expose these routines directly on systems that lack the 64x64 PMULL instructions but do
  implement the CRC32 ones, which makes the driver that is based solely on those CRC32
  instructions redundant. So remove it. Note that this aligns arm64 with ARM, whose
  accelerated CRC32 driver also combines the CRC32 extension based and the PMULL based
  versions.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Tested-by: Matthias Brugger <mbrugger@suse.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes - don't use IV buffer to return final keystream block (Ard Biesheuvel, 2017-02-03; 2 files, -11/+14)
  The ARM bit sliced AES core code uses the IV buffer to pass the final keystream block back
  to the glue code if the input is not a multiple of the block size, so that the asm code
  does not have to deal with anything except 16 byte blocks. This is done under the
  assumption that the outgoing IV is meaningless anyway in this case, given that chaining is
  no longer possible under these circumstances. However, as it turns out, the CCM driver
  does expect the IV to retain a value that is equal to the original IV except for the
  counter value, and even interprets byte zero as a length indicator, which may result in
  memory corruption if the IV is overwritten with something else. So use a separate buffer
  to return the final keystream block.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
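  The shape of the fix, as a standalone sketch (not the ARM assembly or glue code): the
  keystream for a final partial block is produced into its own buffer, leaving the caller's
  IV/counter block untouched for layers such as CCM that still inspect it afterwards. The
  same applies to the arm64 variant in the next commit.

      #include <stddef.h>

      /* XOR a partial final block against a keystream block generated into a
       * local buffer instead of overwriting the caller's IV.
       */
      static void ctr_xor_tail(void (*encrypt_block)(unsigned char out[16],
                                                     const unsigned char in[16]),
                               unsigned char *dst, const unsigned char *src,
                               size_t tail_len, const unsigned char ctr[16])
      {
          unsigned char ks[16];   /* separate keystream buffer, not the IV */
          size_t i;

          encrypt_block(ks, ctr);
          for (i = 0; i < tail_len && i < 16; i++)
              dst[i] = src[i] ^ ks[i];
      }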
* crypto: arm64/aes - don't use IV buffer to return final keystream block (Ard Biesheuvel, 2017-02-03; 2 files, -18/+28)
  The arm64 bit sliced AES core code uses the IV buffer to pass the final keystream block
  back to the glue code if the input is not a multiple of the block size, so that the asm
  code does not have to deal with anything except 16 byte blocks. This is done under the
  assumption that the outgoing IV is meaningless anyway in this case, given that chaining is
  no longer possible under these circumstances. However, as it turns out, the CCM driver
  does expect the IV to retain a value that is equal to the original IV except for the
  counter value, and even interprets byte zero as a length indicator, which may result in
  memory corruption if the IV is overwritten with something else. So use a separate buffer
  to return the final keystream block.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes - replace scalar fallback with plain NEON fallback (Ard Biesheuvel, 2017-02-03; 2 files, -11/+29)
  The new bitsliced NEON implementation of AES uses a fallback in two places: CBC encryption
  (which is strictly sequential, whereas this driver can only operate efficiently on 8 blocks
  at a time), and the XTS tweak generation, which involves encrypting a single AES block with
  a different key schedule. The plain (i.e., non-bitsliced) NEON code is more suitable as a
  fallback, given that it is faster than scalar on low end cores (which is what the NEON
  implementations target, since high end cores have dedicated instructions for AES), and
  shows similar behavior in terms of D-cache footprint and sensitivity to cache timing
  attacks. So switch the fallback handling to the plain NEON driver.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-neon-blk - tweak performance for low end cores (Ard Biesheuvel, 2017-02-03; 2 files, -135/+102)
  The non-bitsliced AES implementation using the NEON is highly sensitive to
  micro-architectural details, and, as it turns out, the Cortex-A53 on the Raspberry Pi 3 is
  a core that can benefit from this code, given that its scalar AES performance is abysmal
  (32.9 cycles per byte). The new bitsliced AES code manages 19.8 cycles per byte on this
  core, but can only operate on 8 blocks at a time, which is not supported by all chaining
  modes. With a bit of tweaking, we can get the plain NEON code to run at 22.0 cycles per
  byte, making it useful for sequential modes like CBC encryption. (Like bitsliced NEON, the
  plain NEON implementation does not use any lookup tables, which makes it easy on the
  D-cache, and invulnerable to cache timing attacks.) So tweak the plain NEON AES code to
  use tbl instructions rather than shl/sri pairs, and to avoid the need to reload permutation
  vectors or other constants from memory in every round. Also, improve the decryption
  performance by switching to 16x8 pmul instructions for performing the multiplications in
  GF(2^8). To allow the ECB and CBC encrypt routines to be reused by the bitsliced NEON code
  in a subsequent patch, export them from the module.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes - performance tweak (Ard Biesheuvel, 2017-02-03; 1 file, -33/+19)
  Shuffle some instructions around in the __hround macro to shave off 0.1 cycles per byte on
  Cortex-A57.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes - avoid literals for cross-module symbol references (Ard Biesheuvel, 2017-02-03; 1 file, -5/+2)
  Using simple adrp/add pairs to refer to the AES lookup tables exposed by the generic AES
  driver (which could be loaded far away from this driver when KASLR is in effect) was
  unreliable at module load time before commit 41c066f2c4d4 ("arm64: assembler: make adr_l
  work in modules under KASLR"), which is why the AES code used literals instead. So now we
  can get rid of the literals, and switch to the adr_l macro.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/chacha20 - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 1 file, -1/+0)
  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment
  in the core algorithm than relying on the crypto API to copy the data to a suitably
  aligned buffer.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-blk - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 2 files, -15/+9)
  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment
  in the core algorithm than relying on the crypto API to copy the data to a suitably
  aligned buffer.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm64/aes-ce-ccm - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 1 file, -1/+0)
  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment
  in the core algorithm than relying on the crypto API to copy the data to a suitably
  aligned buffer.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/chacha20 - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 1 file, -1/+0)
  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment
  in the core algorithm than relying on the crypto API to copy the data to a suitably
  aligned buffer.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: arm/aes-ce - remove cra_alignmask (Ard Biesheuvel, 2017-02-03; 2 files, -52/+47)
  Remove the unnecessary alignmask: it is much more efficient to deal with the misalignment
  in the core algorithm than relying on the crypto API to copy the data to a suitably
  aligned buffer.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - Fix Smatch Complaint (Harsh Jain, 2017-02-03; 1 file, -2/+3)
  Initialise variable after null check.
  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Harsh Jain <harsh@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - Fix wrong typecasting (Harsh Jain, 2017-02-03; 1 file, -5/+4)
  Cast the pointer to the correct structure type.
  Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - Change algo priority (Harsh Jain, 2017-02-03; 1 file, -1/+1)
  Update algorithm priorities to 3000.
  Signed-off-by: Harsh Jain <harsh@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - Change cra_flags for cipher algos (Harsh Jain, 2017-02-03; 1 file, -3/+3)
  Change cipher algos flags to CRYPTO_ALG_TYPE_ABLKCIPHER.
  Signed-off-by: Harsh Jain <harsh@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - Use cipher instead of Block Cipher in gcm setkey (Harsh Jain, 2017-02-03; 1 file, -11/+9)
  A single block of encryption can be done with aes-generic; there is no need for cbc(aes).
  This patch replaces cbc(aes-generic) with aes-generic.
  Signed-off-by: Harsh Jain <harsh@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
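  For a single AES block, the bare cipher interface is enough. A minimal sketch using the
  kernel crypto API (not the chcr code itself) of the kind of step a GCM setkey performs:
  deriving the hash subkey H = AES_K(0^128) with a plain "aes" cipher rather than a full
  cbc(aes) skcipher.

      #include <linux/crypto.h>
      #include <linux/err.h>
      #include <linux/string.h>

      /* Encrypt the all-zero block with a single-block cipher to obtain H. */
      static int derive_gcm_subkey(const u8 *key, unsigned int keylen, u8 h[16])
      {
          struct crypto_cipher *aes;
          int err;

          aes = crypto_alloc_cipher("aes", 0, 0);
          if (IS_ERR(aes))
              return PTR_ERR(aes);

          err = crypto_cipher_setkey(aes, key, keylen);
          if (!err) {
              memset(h, 0, 16);
              crypto_cipher_encrypt_one(aes, h, h);   /* H = E_K(0^128) */
          }

          crypto_free_cipher(aes);
          return err;
      }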
* crypto: chcr - fix itnull.cocci warnings (Harsh Jain, 2017-02-03; 1 file, -1/+1)
  The first argument to list_for_each_entry cannot be NULL.
  Generated by: scripts/coccinelle/iterators/itnull.cocci
  Signed-off-by: Julia Lawall <julia.lawall@lip6.fr>
  Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
  Signed-off-by: Harsh Jain <harsh@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chcr - Change flow IDs (Harsh Jain, 2017-02-03; 4 files, -12/+24)
  Assign a flowc id to each outgoing request. Firmware uses the flowc id to schedule each
  request onto the HW. Without this change, FW replies may be missed.
  Reviewed-by: Hariprasad Shenai <hariprasad@chelsio.com>
  Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - add verbose debug facilities to print hw register names (Cyrille Pitchen, 2017-02-03; 1 file, -2/+108)
  When VERBOSE_DEBUG is defined and the SHA_FLAGS_DUMP_REG flag is set in dd->flags, this
  patch prints the register names and values when performing IO accesses.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-authenc - add support to authenc(hmac(shaX), Y(aes)) modes (Cyrille Pitchen, 2017-02-03; 6 files, -15/+883)
  This patch allows the AES and SHA hardware accelerators on some Atmel SoCs to be combined.
  Doing so, AES blocks are only written to/read from the AES hardware; those blocks are also
  transferred from the AES to the SHA accelerator internally, without additional accesses to
  the system buses. Hence, the AES and SHA accelerators work in parallel to process all the
  data blocks, instead of serializing the process by (de)crypting those blocks first and
  then authenticating them afterwards, as the generic crypto/authenc.c driver does. Of
  course, both the AES and SHA hardware accelerators need to be available before we can
  start to process the data blocks, hence we use their crypto request queues to synchronize
  both drivers.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-aes - fix atmel_aes_handle_queue() (Cyrille Pitchen, 2017-02-03; 1 file, -2/+5)
  This patch fixes the value returned by atmel_aes_handle_queue(), which could have been
  wrong previously when the crypto request was started synchronously but became asynchronous
  during the ctx->start() call.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - add support to hmac(shaX) (Cyrille Pitchen, 2017-02-03; 2 files, -1/+601)
  This patch adds support for the hmac(shaX) algorithms.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - add simple DMA transfers (Cyrille Pitchen, 2017-02-03; 1 file, -0/+116)
  This patch adds a simple function to perform data transfer with the DMA controller.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - add atmel_sha_cpu_start() (Cyrille Pitchen, 2017-02-03; 1 file, -0/+90)
  This patch adds a simple function to perform data transfer with PIO, hence handled by the
  CPU.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - add SHA_MR_MODE_IDATAR0 (Cyrille Pitchen, 2017-02-03; 1 file, -0/+1)
  This patch defines an alias macro to SHA_MR_MODE_PDC, which is not suited for DMA usage.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - add atmel_sha_wait_for_data_ready() (Cyrille Pitchen, 2017-02-03; 1 file, -0/+13)
  This patch simply defines a helper function to test the 'Data Ready' flag of the Status
  Register. It also gives the crypto request a chance to be processed synchronously if this
  'Data Ready' flag is already set when polling the Status Register. Indeed, running
  synchronously avoids the latency of the 'Data Ready' interrupt. When the 'Data Ready' flag
  has not been set yet, we enable the associated interrupt and resume processing the crypto
  request asynchronously from the 'done' task just as before.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
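  A hypothetical sketch of the "poll once, else arm the interrupt" idea. The register
  offsets, bit names and structure below are placeholders, not taken from the Atmel
  datasheet or driver.

      #include <linux/bitops.h>
      #include <linux/errno.h>
      #include <linux/io.h>
      #include <linux/types.h>

      #define MY_SHA_IER          0x10        /* placeholder offsets and bits */
      #define MY_SHA_ISR          0x1c
      #define MY_SHA_INT_DATARDY  BIT(0)

      struct my_sha_dev {
          void __iomem *io_base;
          int (*resume)(struct my_sha_dev *dd);
      };

      static int my_sha_wait_for_data_ready(struct my_sha_dev *dd,
                                            int (*resume)(struct my_sha_dev *dd))
      {
          u32 isr = readl_relaxed(dd->io_base + MY_SHA_ISR);

          /* Already ready: finish synchronously and skip the IRQ latency. */
          if (isr & MY_SHA_INT_DATARDY)
              return resume(dd);

          /* Otherwise resume from the 'done' task once the interrupt fires. */
          dd->resume = resume;
          writel_relaxed(MY_SHA_INT_DATARDY, dd->io_base + MY_SHA_IER);
          return -EINPROGRESS;
      }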
* crypto: atmel-sha - redefine SHA_FLAGS_SHA* flags to match SHA_MR_ALGO_SHA* (Cyrille Pitchen, 2017-02-03; 2 files, -13/+33)
  This patch modifies the SHA_FLAGS_SHA* flags: those algo flags are now organized as values
  of a single bitfield instead of individual bits. This reduces the number of bits needed to
  encode all possible values. Also, the new values match the SHA_MR_ALGO_SHA* values, hence
  the algorithm bitfield of the SHA_MR register can simply be set with:
      mr = (mr & ~SHA_FLAGS_ALGO_MASK) | (ctx->flags & SHA_FLAGS_ALGO_MASK)
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - make atmel_sha_done_task more generic (Cyrille Pitchen, 2017-02-03; 1 file, -5/+16)
  This patch is a transitional patch. It updates atmel_sha_done_task() to make it more
  generic. Indeed, it adds a new .resume() member in the atmel_sha_dev structure. This hook
  is called from atmel_sha_done_task() to resume processing an asynchronous request.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - update request queue management to make it more generic (Cyrille Pitchen, 2017-02-03; 1 file, -20/+54)
  This patch is a transitional patch. It splits the atmel_sha_handle_queue() function: now
  atmel_sha_handle_queue() only manages the request queue and calls a new .start() hook from
  the atmel_sha_ctx structure. This hook allows different kinds of requests to be
  implemented while still being handled by a single queue. Also, when the req parameter of
  atmel_sha_handle_queue() refers to the very same request as the one returned by
  crypto_dequeue_request(), the queue management now gives this crypto request a chance to
  be handled synchronously, hence reducing latency. The .start() hook returns 0 if the
  crypto request was handled synchronously and -EINPROGRESS if the crypto request still
  needs to be handled asynchronously. Besides, the new .is_async member of the
  atmel_sha_dev structure helps tag this asynchronous state. Indeed, the
  req->base.complete() callback should not be called if the crypto request is handled
  synchronously.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: atmel-sha - create function to get an Atmel SHA device (Cyrille Pitchen, 2017-02-03; 1 file, -4/+11)
  This is a transitional patch: it creates the atmel_sha_find_dev() function, which will be
  used in further patches to share the source code responsible for finding an Atmel SHA
  device.
  Signed-off-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: doc - Fix hash export state information (Rabin Vincent, 2017-02-03; 2 files, -7/+13)
  The documentation states that crypto_ahash_reqsize() provides the size of the state
  structure used by crypto_ahash_export(). But it's actually crypto_ahash_statesize() which
  provides this size.
  Signed-off-by: Rabin Vincent <rabinv@axis.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
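  A minimal sketch of the corrected usage (kernel hash API; error handling trimmed): the
  export buffer is sized with crypto_ahash_statesize(), while crypto_ahash_reqsize() only
  sizes the request's private context.

      #include <crypto/hash.h>
      #include <linux/slab.h>

      /* Save the partial state of an in-progress ahash request so it can be
       * resumed later with crypto_ahash_import().
       */
      static void *save_hash_state(struct ahash_request *req)
      {
          struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
          void *state;

          /* statesize, not reqsize, is the size of the exported state */
          state = kmalloc(crypto_ahash_statesize(tfm), GFP_KERNEL);
          if (state && crypto_ahash_export(req, state)) {
              kfree(state);
              state = NULL;
          }
          return state;
      }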
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Herbert Xu, 2017-02-03; 10 files, -89/+96)
  Merge the crypto tree to pick up the arm64 output IV patch.
  * crypto: chcr - Fix key length for RFC4106 (Harsh Jain, 2017-02-03; 1 file, -2/+2)
    Check keylen before copying the salt to avoid an integer wraparound.
    Signed-off-by: Harsh Jain <harsh@chelsio.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
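    A hedged sketch of the kind of check involved (not the chcr code itself): an
    rfc4106(gcm(aes)) key blob is the AES key followed by a 4-byte salt, so the length must
    be validated before subtracting, or an undersized key wraps the unsigned length around
    and drives an out-of-bounds copy.

        #include <linux/errno.h>
        #include <linux/string.h>
        #include <linux/types.h>

        /* Split an RFC 4106 key blob into the AES key length and the 4-byte salt. */
        static int rfc4106_split_key(const u8 *key, unsigned int keylen,
                                     u8 salt[4], unsigned int *aes_keylen)
        {
            if (keylen < 4)
                return -EINVAL;     /* reject before keylen - 4 can wrap */

            keylen -= 4;
            memcpy(salt, key + keylen, 4);
            *aes_keylen = keylen;
            return 0;
        }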
  * crypto: algif_aead - Fix kernel panic on list_del (Harsh Jain, 2017-02-03; 1 file, -1/+1)
    The kernel panics when a userspace program tries to access the AEAD interface. Remove
    the node from the linked list before freeing its memory.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Harsh Jain <harsh@chelsio.com>
    Reviewed-by: Stephan Müller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
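    The bug class is the classic "free a node that is still linked". A minimal sketch of the
    correct ordering with a hypothetical node type (not the algif_aead code):

        #include <linux/list.h>
        #include <linux/slab.h>

        struct rx_entry {
            struct list_head list;
        };

        /* Unlink first, then free: once kfree() runs, the node's list pointers
         * are gone and any later walk of the list dereferences freed memory.
         */
        static void release_entry(struct rx_entry *e)
        {
            list_del(&e->list);
            kfree(e);
        }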
  * crypto: aesni - Fix failure when pcbc module is absent (Herbert Xu, 2017-02-03; 1 file, -4/+4)
    When aesni is built as a module together with pcbc, the pcbc module must be present for
    aesni to load. However, the pcbc module may not be present for reasons such as its
    absence on initramfs. This patch allows aesni to function even if the pcbc module is
    enabled but not present.
    Reported-by: Arkadiusz Miśkiewicz <arekm@maven.pl>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: ccp - Fix double add when creating new DMA command (Gary R Hook, 2017-02-03; 2 files, -1/+6)
    Eliminate a double-add by creating a new list to manage command descriptors when created;
    move the descriptor to the pending list when the command is submitted.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: ccp - Fix DMA operations when IOMMU is enabled (Gary R Hook, 2017-02-03; 1 file, -1/+1)
    An I/O page fault occurs when the IOMMU is enabled on a system that supports the v5 CCP.
    DMA operations use a Request ID value that does not match what is expected by the IOMMU,
    resulting in the I/O page fault. Setting the Request ID value to 0 corrects this issue.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Gary R Hook <gary.hook@amd.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: chcr - Check device is allocated before use (Harsh Jain, 2017-02-03; 1 file, -10/+8)
    Ensure dev is allocated for the crypto ULD context before using the device for crypto
    operations.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: chcr - Fix panic on dma_unmap_sg (Harsh Jain, 2017-02-03; 2 files, -23/+29)
    Save the DMA-mapped sg list addresses in the request context buffer.
    Signed-off-by: Atul Gupta <atul.gupta@chelsio.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: qat - zero esram only for DH85x devices (Giovanni Cabiddu, 2017-02-02; 1 file, -2/+2)
    Zero the embedded RAM in DH85x devices. This is not needed for newer generations, as it
    is done by HW.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: qat - fix bar discovery for c62x (Giovanni Cabiddu, 2017-02-02; 2 files, -1/+2)
    Some accelerators of the c62x series have only two bars. This patch skips BAR0 if the
    accelerator does not have it.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: arm64/aes-blk - honour iv_out requirement in CBC and CTR modes (Ard Biesheuvel, 2017-01-23; 1 file, -46/+42)
    Update the ARMv8 Crypto Extensions and the plain NEON AES implementations in CBC and CTR
    modes to return the next IV back to the skcipher API client. This is necessary for
    chaining to work correctly. Note that for CTR, this is only done if the request is a
    round multiple of the block size, since otherwise, chaining is impossible anyway.
    Cc: <stable@vger.kernel.org> # v3.16+
    Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
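    What "returning the next IV" means for CBC encryption, as a standalone illustration (not
    the arm64 glue code): after processing, the last ciphertext block is copied back into the
    caller's IV buffer so a follow-up request chains correctly. encrypt_block() stands in
    for one AES block encryption.

        #include <stddef.h>
        #include <string.h>

        static void cbc_encrypt(void (*encrypt_block)(unsigned char out[16],
                                                      const unsigned char in[16]),
                                unsigned char *dst, const unsigned char *src,
                                size_t nblocks, unsigned char iv[16])
        {
            unsigned char prev[16], x[16];
            size_t i, j;

            memcpy(prev, iv, 16);
            for (i = 0; i < nblocks; i++) {
                for (j = 0; j < 16; j++)
                    x[j] = src[i * 16 + j] ^ prev[j];
                encrypt_block(prev, x);             /* C_i = E_K(P_i ^ C_{i-1}) */
                memcpy(dst + i * 16, prev, 16);
            }
            memcpy(iv, prev, 16);                   /* iv_out = last ciphertext block */
        }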
  * crypto: api - Clear CRYPTO_ALG_DEAD bit before registering an alg (Salvatore Benedetto, 2017-01-23; 1 file, -0/+1)
    Make sure the CRYPTO_ALG_DEAD bit is cleared before proceeding with the algorithm
    registration. This fixes qat-dh registration when the driver is restarted.
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Salvatore Benedetto <salvatore.benedetto@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  * crypto: aesni - Fix failure when built-in with modular pcbc (Herbert Xu, 2016-12-30; 1 file, -1/+2)
    If aesni is built-in but pcbc is built as a module, then aesni will fail completely
    because when it tries to register the pcbc variant of aes the pcbc template is not
    available. This patch fixes this by modifying the pcbc presence test so that if aesni is
    built-in then pcbc must also be built-in for it to be used by aesni.
    Fixes: 85671860caac ("crypto: aesni - Convert to skcipher")
    Reported-by: Stephan Müller <smueller@chronox.de>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
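    The underlying Kconfig subtlety can be shown with the standard kconfig helpers. This is
    a hedged sketch of the distinction only; the actual aesni test is shaped differently.
    IS_ENABLED() is true for =y and =m alike, so built-in code can pass that check yet still
    be unable to use a facility that only exists as a module; IS_REACHABLE() is true only
    when the option is usable from the current object (=y always, =m only if this code is
    itself modular).

        #include <linux/kconfig.h>
        #include <linux/types.h>

        /* True only when pcbc is actually reachable from this compilation unit. */
        static bool pcbc_reachable(void)
        {
            return IS_REACHABLE(CONFIG_CRYPTO_PCBC);
        }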