path: root/drivers/crypto/marvell
Commit message (Author, Date; Files changed, Lines -/+)
* crypto: marvell - Copy IVDIG before launching partial DMA ahash requests (Romain Perier, 2017-11-30; 3 files changed, -3/+43)
  [ Upstream commit 8759fec4af222f338d08f8f1a7ad6a77ca6cb301 ]
  Currently, inner IV/DIGEST data are only copied once into the hash engines and not set
  explicitly before launching a request that is not a first frag. This is an issue especially
  when multiple ahash reqs are computed in parallel or chained with cipher request, as the
  state of the request being computed is not updated into the hash engine. It leads to
  non-deterministic corrupted digest results.
  Fixes: commit 2786cee8e50b ("crypto: marvell - Move SRAM I/O operations to step functions")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
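  The fix pattern described here amounts to re-programming the software hash state into the
  engine before every launch that is not a first fragment. A minimal sketch, assuming the
  driver-internal CESA_IVDIG() register macro, struct mv_cesa_engine and creq->state (not the
  literal upstream diff):

      /* Re-seed the engine's IV/DIGEST registers from the per-request state
       * before (re)starting a partial DMA hash request.  Names are assumed. */
      static void cesa_reload_ivdig(struct mv_cesa_engine *engine,
                                    const u32 *state, unsigned int digsize)
      {
              unsigned int i;

              for (i = 0; i < digsize / 4; i++)
                      writel_relaxed(state[i], engine->regs + CESA_IVDIG(i));
      }

      /* called from the ahash DMA step function: */
      cesa_reload_ivdig(engine, creq->state, digsize);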
* crypto: marvell - Don't corrupt state of an STD req for re-stepped ahash (Romain Perier, 2016-12-07; 1 file changed, -3/+5)
  mv_cesa_hash_std_step() copies the creq->state into the SRAM at each step, but this is only
  required on the first one. By doing that, we overwrite the engine state, and get erroneous
  results when the crypto request is split in several chunks to fit in the internal SRAM.
  This commit changes the function to copy the state only on the first step.
  Fixes: commit 2786cee8e50b ("crypto: marvell - Move SRAM I/O op...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Don't copy hash operation twice into the SRAM (Romain Perier, 2016-12-07; 1 file changed, -3/+0)
  No need to copy the template of a hash operation twice into the SRAM from the step function.
  Fixes: commit 85030c5168f1 ("crypto: marvell - Add support for chai...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Don't hardcode block size in mv_cesa_ahash_cache_req (Romain Perier, 2016-08-09; 1 file changed, -1/+1)
  Don't use a bare 64 as the maximum block size in mv_cesa_ahash_cache_req(); use
  CESA_MAX_HASH_BLOCK_SIZE instead, which is better for readability.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Don't overwrite default creq->state during initialization (Romain Perier, 2016-08-09; 1 file changed, -6/+9)
  Currently, in mv_cesa_{md5,sha1,sha256}_init, creq->state is initialized before the call to
  mv_cesa_ahash_init. This is wrong because that function zeroes creq with memset, so the
  'state' field holding the default DIGEST is overwritten. This commit fixes the issue by
  initializing creq->state just after the call to mv_cesa_ahash_init.
  Fixes: commit b0ef51067cb4 ("crypto: marvell/cesa - initialize hash...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
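  A minimal sketch of the corrected ordering, assuming driver-internal helpers such as
  mv_cesa_ahash_init() and mv_cesa_set_op_cfg() plus the MD5_H* constants from <crypto/md5.h>
  (not the literal upstream diff):

      static int mv_cesa_md5_init(struct ahash_request *req)
      {
              struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
              struct mv_cesa_op_ctx tmpl = { };

              mv_cesa_set_op_cfg(&tmpl, CESA_SA_DESC_CFG_MACM_MD5);

              mv_cesa_ahash_init(req, &tmpl, true);   /* memset()s *creq */

              /* seed the default digest only after the context was zeroed */
              creq->state[0] = MD5_H0;
              creq->state[1] = MD5_H1;
              creq->state[2] = MD5_H2;
              creq->state[3] = MD5_H3;

              return 0;
      }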
* crypto: marvell - Update transformation context for each dequeued req (Romain Perier, 2016-08-09; 1 file changed, -0/+1)
  So far, a sub-part of mv_cesa_int() was responsible for dequeuing completed requests, calling
  the 'cleanup' operation on these requests and then calling the crypto API callback
  'complete'. The problem is that the transformation context 'ctx' was retrieved only once,
  before the while loop. This means that the wrong 'cleanup' operation might be called on the
  wrong type of cesa request, which can lead to memory corruption with this message:
  marvell-cesa f1090000.crypto: dma_pool_free cesa_padding, 5a5a5a5a/5a5a5a5a (bad dma)
  This commit fixes the issue by updating the transformation context for each dequeued cesa
  request.
  Fixes: commit 85030c5168f1 ("crypto: marvell - Add support for chai...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
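  A hedged sketch of the corrected completion loop (the structure and helper names such as
  mv_cesa_dequeue_req_locked() are assumed, not the literal upstream code):

      while (req) {
              struct mv_cesa_ctx *ctx;

              /* look up the context of THIS request, not only the first one */
              ctx = crypto_tfm_ctx(req->tfm);
              ctx->ops->complete(req);
              ctx->ops->cleanup(req);

              local_bh_disable();
              req->complete(req, res);
              local_bh_enable();

              req = mv_cesa_dequeue_req_locked(engine, &backlog);
      }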
* crypto: marvell - make mv_cesa_ahash_cache_req() return bool (Thomas Petazzoni, 2016-08-09; 1 file changed, -11/+9)
  The mv_cesa_ahash_cache_req() function always returns 0, which makes its return value pretty
  much useless. However, in addition to returning a useless value, it also returns a boolean in
  a variable passed by reference to indicate if the request was already cached. So, this commit
  changes mv_cesa_ahash_cache_req() to return this boolean. It consequently simplifies the only
  call site of mv_cesa_ahash_cache_req(), where the "ret" variable is no longer needed.
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
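  A hedged sketch of the simplified interface (field and constant names assumed to follow the
  driver):

      static bool mv_cesa_ahash_cache_req(struct ahash_request *req)
      {
              struct mv_cesa_ahash_req *creq = ahash_request_ctx(req);
              bool cached = false;

              if (creq->cache_ptr + req->nbytes < CESA_MAX_HASH_BLOCK_SIZE &&
                  !creq->last_req) {
                      cached = true;
                      if (req->nbytes)
                              sg_pcopy_to_buffer(req->src, creq->src_nents,
                                                 creq->cache + creq->cache_ptr,
                                                 req->nbytes, 0);
                      creq->cache_ptr += req->nbytes;
              }

              return cached;
      }

      /* the only call site then reduces to: */
      if (mv_cesa_ahash_cache_req(req))
              return 0;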
* crypto: marvell - turn mv_cesa_ahash_init() into a function returning void (Thomas Petazzoni, 2016-08-09; 1 file changed, -3/+1)
  The mv_cesa_ahash_init() function always returns 0, and the return value is anyway never
  checked. Turn it into a function returning void.
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - remove unused parameter in mv_cesa_ahash_dma_add_cache() (Thomas Petazzoni, 2016-08-09; 1 file changed, -2/+1)
  The dma_iter parameter of mv_cesa_ahash_dma_add_cache() is never used, so get rid of it.
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - be explicit about destination in mv_cesa_dma_add_op() (Thomas Petazzoni, 2016-08-09; 1 file changed, -0/+1)
  The mv_cesa_dma_add_op() function builds a mv_cesa_tdma_desc structure to copy the operation
  description to the SRAM, but doesn't explicitly initialize the destination of the copy. It
  works fine because the operation description must be copied to the beginning of the SRAM, and
  the mv_cesa_tdma_desc structure is initialized to zero when allocated. However, it is
  somewhat confusing to not have a destination defined.
  Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Don't copy IV vectors from the _process op for ciphers (Romain Perier, 2016-07-29; 1 file changed, -10/+1)
  The IV output vectors should only be copied from the _complete operation and not from the
  _process operation, i.e. only from the operation that is designed to copy the result of the
  request to the right location. This copy is already done in the _complete operation, so this
  commit removes the duplicated code in the _process op.
  Fixes: 3610d6cd5231 ("crypto: marvell - Add a complete...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Update cache with input sg only when it is unmapped (Romain Perier, 2016-07-28; 1 file changed, -6/+6)
  So far, the cache of the ahash requests was updated from the 'complete' operation. This
  complete operation is called from mv_cesa_tdma_process before the cleanup operation, which
  means that the content of req->src can be read and copied when it is still mapped. This
  commit fixes the issue by moving this cache update from mv_cesa_ahash_complete to
  mv_cesa_ahash_req_cleanup, so the copy is done once the sglist is unmapped.
  Fixes: 1bf6682cb31d ("crypto: marvell - Add a complete operation for..")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Don't chain at DMA level when backlog is disabled (Romain Perier, 2016-07-28; 1 file changed, -3/+4)
  The flag CRYPTO_TFM_REQ_MAY_BACKLOG is optional and can be set by the user to put requests
  into the backlog queue when the main cryptographic queue is full. Before calling
  mv_cesa_tdma_chain we must check the value of the return status to be sure that the current
  request has been correctly queued or added to the backlog.
  Fixes: 85030c5168f1 ("crypto: marvell - Add support for chaining...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
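  A hedged sketch of the check (names such as mv_cesa_req_get_type() and CESA_DMA_REQ are
  assumed): the request is only chained at the TDMA level when the queue accepted it, or
  backlogged it with the caller's consent.

      spin_lock_bh(&engine->lock);
      ret = crypto_enqueue_request(&engine->queue, req);
      if (mv_cesa_req_get_type(creq) == CESA_DMA_REQ &&
          (ret == -EINPROGRESS ||
           (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG)))
              mv_cesa_tdma_chain(engine, creq);
      spin_unlock_bh(&engine->lock);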
* crypto: marvell - Fix memory leaks in TDMA chain for cipher requests (Romain Perier, 2016-07-28; 1 file changed, -8/+6)
  So far in mv_cesa_ablkcipher_dma_req_init, if an error occurs while the tdma chain is being
  built, memory is leaked. This happens because the chain is only assigned at the end of the
  function, so the cleanup function is called with the wrong version of the chain.
  Fixes: db509a45339f ("crypto: marvell/cesa - add TDMA support")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Fix wrong flag used for GFP in mv_cesa_dma_add_iv_op (Romain Perier, 2016-07-19; 1 file changed, -1/+1)
  Use the parameter 'gfp_flags' instead of 'flag' as the second argument of dma_pool_alloc().
  The parameter 'flag' is for the TDMA descriptor; its content has no meaning for the
  allocator.
  Fixes: bac8e805a30d ("crypto: marvell - Copy IV vectors by DMA...")
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
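  The second argument of dma_pool_alloc() is a gfp_t; the TDMA flags belong on the descriptor
  itself. A hedged sketch (pool and field names assumed):

      struct mv_cesa_tdma_desc *tdma;
      dma_addr_t dma_handle;

      tdma = dma_pool_alloc(cesa_dev->dma->tdma_desc_pool, gfp_flags, &dma_handle);
      if (!tdma)
              return ERR_PTR(-ENOMEM);

      tdma->flags = flags;    /* TDMA-specific flags, not GFP flags */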
* crypto: marvell - Increase the size of the crypto queue (Romain Perier, 2016-06-23; 1 file changed, -1/+1)
  Now that crypto requests are chained together at the DMA level, we increase the size of the
  crypto queue for each engine. The result is that, as the backlog list is reached later, the
  crypto stack is not stopped from sending asynchronous requests, so more cryptographic tasks
  are processed by the engines.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Add support for chaining crypto requests in TDMA mode (Romain Perier, 2016-06-23; 5 files changed, -27/+221)
  The Cryptographic Engines and Security Accelerators (CESA) support the Multi-Packet Chain
  Mode. With this mode enabled, multiple tdma requests can be chained and processed by the
  hardware without software intervention. This mode was already activated, but the crypto
  requests were not chained together. Chaining them significantly reduces the number of IRQs:
  instead of being interrupted at the end of each crypto request, we are interrupted at the end
  of the last cryptographic request processed by the engine.
  This commit refactors the code, changes the code architecture and adds the required data
  structures to chain cryptographic requests together before sending them to an engine (stopped
  or possibly already running).
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Add load balancing between engines (Romain Perier, 2016-06-23; 4 files changed, -86/+84)
  This commit adds support for fine-grained load balancing on multi-engine IPs. The engine is
  pre-selected based on its current load and on the weight of the crypto request that is about
  to be processed. The global crypto queue is also moved to each engine. These changes are
  required to allow chaining crypto requests at the DMA level. By using a crypto queue per
  engine, we make sure that we keep the state of the tdma chain synchronized with the crypto
  queue. We also reduce contention on 'cesa_dev->lock' and improve parallelism.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
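  A hedged sketch of weight-based engine selection (field names such as engine->load and
  caps->nengines are assumed): pick the least-loaded engine and account for the weight of the
  incoming request.

      static struct mv_cesa_engine *mv_cesa_select_engine(int weight)
      {
              struct mv_cesa_engine *selected = NULL;
              u32 min_load = U32_MAX;
              int i;

              for (i = 0; i < cesa_dev->caps->nengines; i++) {
                      struct mv_cesa_engine *engine = &cesa_dev->engines[i];
                      u32 load = atomic_read(&engine->load);

                      if (load < min_load) {
                              min_load = load;
                              selected = engine;
                      }
              }

              atomic_add(weight, &selected->load);

              return selected;
      }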
* crypto: marvell - Move SRAM I/O operations to step functions (Romain Perier, 2016-06-23; 2 files changed, -12/+12)
  So far, crypto requests were sent to the engines sequentially. This commit moves the SRAM I/O
  operations from the prepare functions to the step functions. It provides flexibility for
  future work and allows preparing a request while the engine is running.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Add a complete operation for async requests (Romain Perier, 2016-06-23; 4 files changed, -15/+39)
  So far, the 'process' operation was used to check whether the current request had been
  correctly handled by the engine and, if so, to copy information from the SRAM to the main
  memory. Now we split this operation: we keep the 'process' operation, which still checks if
  the request was correctly handled by the engine or not, and we add a new operation for
  completion. The 'complete' method copies the content of the SRAM to memory. This will soon
  become useful if we want to call the process and the complete operations from different
  locations depending on the type of the request (different cleanup logic).
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
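  Conceptually, the per-request operations gain an extra callback. A hedged sketch (struct and
  member names assumed, not the exact driver definitions):

      struct mv_cesa_req_ops {
              void (*step)(struct crypto_async_request *req);
              int (*process)(struct crypto_async_request *req, u32 status);
              void (*cleanup)(struct crypto_async_request *req);
              void (*complete)(struct crypto_async_request *req);
      };

      /* interrupt path: */
      res = ctx->ops->process(req, status);
      if (res != -EINPROGRESS)
              ctx->ops->complete(req);   /* SRAM -> memory copy happens here */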
* crypto: marvell - Move tdma chain out of mv_cesa_tdma_req and remove it (Romain Perier, 2016-06-23; 5 files changed, -98/+84)
  Currently, the only way to access the tdma chain is to use the 'req' union from a
  mv_cesa_{ablkcipher,ahash} request. This will soon become a problem if we want to handle the
  TDMA chaining vs standard/non-DMA processing in a generic way (with generic functions at the
  cesa.c level detecting whether the request should be queued at the DMA level or not). Hence
  the decision to move the chain field to the mv_cesa_req level, at the expense of adding two
  void * fields to all request contexts (including non-DMA ones). To limit the overhead, we get
  rid of the type field, which can now be deduced from the req->chain.first value. Once these
  changes are done the union is no longer needed, so remove it and move
  mv_cesa_ablkcipher_std_req and mv_cesa_req into mv_cesa_ablkcipher_req directly. There is
  also no need to keep the 'base' field in the union of mv_cesa_ahash_req, so move it into the
  upper structure.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Copy IV vectors by DMA transfers for acipher requests (Romain Perier, 2016-06-23; 4 files changed, -9/+60)
  Add a TDMA descriptor at the end of the request for copying the output IV vector via a DMA
  transfer. This is a good way of offloading as much processing as possible to the DMA and the
  crypto engine. This is also required for processing multiple cipher requests in chained mode,
  as otherwise the content of the IV vector would be overwritten by the last processed request.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell - Fix wrong type check in dma functions (Romain Perier, 2016-06-23; 1 file changed, -2/+3)
  So far, the way that the type of a TDMA operation was checked was wrong. We have to use the
  type mask in order to get the right part of the flag containing the type of the operation.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
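  Since the TDMA type is a bit-field inside the flags word, it has to be masked before
  comparing. A hedged sketch (constant and pool names assumed):

      /* "tdma->flags & CESA_TDMA_OP" would also match other types that share
       * bits with CESA_TDMA_OP; compare the whole type field instead */
      if ((tdma->flags & CESA_TDMA_TYPE_MSK) == CESA_TDMA_OP)
              dma_pool_free(cesa_dev->dma->op_pool, tdma->op,
                            le32_to_cpu(tdma->src));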
* crypto: marvell - Check engine is not already running when enabling a req (Romain Perier, 2016-06-23; 3 files changed, -0/+6)
  Add a BUG_ON() call when the driver tries to launch a crypto request while the engine is
  still processing the previous one. This replaces a silent system hang by a verbose kernel
  panic with the associated backtrace to let the user know that something went wrong in the
  CESA driver.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
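  A hedged sketch of the check (register and bit names assumed): refuse to start the
  accelerator if its enable bit is still set from the previous request.

      /* crash loudly instead of silently corrupting the running request */
      BUG_ON(readl(engine->regs + CESA_SA_CMD) & CESA_SA_CMD_EN_CESA_SA_ACCL0);
      writel(CESA_SA_CMD_EN_CESA_SA_ACCL0, engine->regs + CESA_SA_CMD);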
* crypto: marvell - Add a macro constant for the size of the crypto queue (Romain Perier, 2016-06-23; 1 file changed, -1/+4)
  Add a macro constant for the size of the crypto queue, instead of using a numeric value
  directly. This will be easier to maintain in case we add more than one crypto queue of the
  same size.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell/cesa - Use dma_pool_zalloc (Julia Lawall, 2016-05-03; 1 file changed, -3/+2)
  dma_pool_zalloc combines dma_pool_alloc and memset 0. The semantic patch that makes this
  transformation is as follows: (http://coccinelle.lip6.fr/)
  // <smpl>
  @@
  expression d,e;
  statement S;
  @@
        d =
  -         dma_pool_alloc
  +         dma_pool_zalloc
            (...);
        if (!d) S
  -     memset(d, 0, sizeof(*d));
  // </smpl>
  Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell/cesa - Improving code readability (Romain Perier, 2016-04-20; 1 file changed, -5/+5)
  When looking for available engines, the variable "engine" is assigned to "&cesa->engines[i]"
  at the beginning of the for loop. Replace subsequent occurrences of "&cesa->engines[i]" with
  "engine" in order to improve readability.
  Signed-off-by: Romain Perier <romain.perier@free-electrons.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: marvell/cesa - remove unneeded condition (Dan Carpenter, 2016-04-05; 1 file changed, -2/+1)
  creq->cache[] is an array inside the struct; it's not a pointer and it can't be NULL.
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2016-03-23; 3 files changed, -70/+41)
  Pull crypto fixes from Herbert Xu:
  "This fixes the following issues:
   API:
    - Fix kzalloc error path crash in ecryptfs added by skcipher conversion. Note the subject
      of the commit is screwed up and the correct subject is actually in the body.
   Drivers:
    - A number of fixes to the marvell cesa hashing code.
    - Remove bogus nested irqsave that clobbers the saved flags in ccp"
  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
    crypto: marvell/cesa - forward devm_ioremap_resource() error code
    crypto: marvell/cesa - initialize hash states
    crypto: marvell/cesa - fix memory leak
    crypto: ccp - fix lock acquisition code
    eCryptfs: Use skcipher and shash
| * crypto: marvell/cesa - forward devm_ioremap_resource() error code (Boris BREZILLON, 2016-03-17; 1 file changed, -1/+1)
    Forward devm_ioremap_resource() error code instead of returning -ENOMEM.
    Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
    Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
    Fixes: f63601fd616a ("crypto: marvell/cesa - add a new driver for Marvell's CESA")
    Cc: <stable@vger.kernel.org> # 4.2+
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
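    The standard idiom propagates the ERR_PTR-encoded error instead of replacing it with
    -ENOMEM; a minimal sketch (the structure field names are assumed):

        cesa->regs = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(cesa->regs))
                return PTR_ERR(cesa->regs);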
| * crypto: marvell/cesa - initialize hash states (Boris BREZILLON, 2016-03-17; 1 file changed, -0/+20)
    ->export() might be called before we have done an update operation, and in this case the
    ->state field is left uninitialized. Put the correct default value when initializing the
    request.
    Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - fix memory leak (Boris BREZILLON, 2016-03-17; 2 files changed, -69/+20)
    Crypto requests are not guaranteed to be finalized (->final() call), and can be freed at
    any moment, without getting any notification from the core. This can lead to memory leaks
    of the ->cache buffer. Make this buffer part of the request object, and allocate an extra
    buffer from the DMA cache pool when doing DMA operations.
    As a side effect, this patch also fixes another bug related to cache allocation and DMA
    operations. When the core allocates a new request and imports an existing state, a cache
    buffer can be allocated (depending on the state). The problem is, at that very moment, we
    don't know yet whether the request will use DMA or not, and since everything is likely to
    be initialized to zero, mv_cesa_ahash_alloc_cache() thinks it should allocate a buffer for
    standard operation. But when mv_cesa_ahash_free_cache() is called, req->type has been set
    to CESA_DMA_REQ in the meantime, thus leading to an invalid dma_pool_free() call (the
    buffer passed in argument has not been allocated from the pool).
    Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
    Reported-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: marvell/cesa - fix test in mv_cesa_dev_dma_init() (Boris BREZILLON, 2016-02-06; 1 file changed, -1/+1)
    We are checking twice if dma->cache_pool is not NULL but are never testing the
    dma->padding_pool value.
    Cc: stable@vger.kernel.org
    Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
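    A hedged sketch of the corrected test (pool name and creation parameters assumed): check
    the pool that was just created instead of re-checking cache_pool.

        dma->padding_pool = dmam_pool_create("cesa_padding", dev, 72, 1, 0);
        if (!dma->padding_pool)
                return -ENOMEM;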
* crypto: marvell - check return value of sg_nents_for_len (LABBE Corentin, 2015-11-17; 2 files changed, -0/+12)
  The sg_nents_for_len() function could fail, so this patch adds a check for its return value.
  Signed-off-by: LABBE Corentin <clabbe.montjoie@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
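  sg_nents_for_len() returns a negative errno when the scatterlist is shorter than the
  requested length, so the result must be checked before use. A hedged sketch (field names
  assumed):

      int ret = sg_nents_for_len(req->src, req->nbytes);

      if (ret < 0) {
              dev_err(cesa_dev->dev, "Invalid number of src SG entries\n");
              return ret;
      }
      creq->src_nents = ret;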
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2015-11-04; 4 files changed, -295/+286)
  Pull crypto update from Herbert Xu:
  "API:
    - Add support for cipher output IVs in testmgr
    - Add missing crypto_ahash_blocksize helper
    - Mark authenc and des ciphers as not allowed under FIPS.
   Algorithms:
    - Add CRC support to 842 compression
    - Add keywrap algorithm
    - A number of changes to the akcipher interface:
      + Separate functions for setting public/private keys.
      + Use SG lists.
   Drivers:
    - Add Intel SHA Extension optimised SHA1 and SHA256
    - Use dma_map_sg instead of custom functions in crypto drivers
    - Add support for STM32 RNG
    - Add support for ST RNG
    - Add Device Tree support to exynos RNG driver
    - Add support for mxs-dcp crypto device on MX6SL
    - Add xts(aes) support to caam
    - Add ctr(aes) and xts(aes) support to qat
    - A large set of fixes from Russell King for the marvell/cesa driver"
  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (115 commits)
    crypto: asymmetric_keys - Fix unaligned access in x509_get_sig_params()
    crypto: akcipher - Don't #include crypto/public_key.h as the contents aren't used
    hwrng: exynos - Add Device Tree support
    hwrng: exynos - Fix missing configuration after suspend to RAM
    hwrng: exynos - Add timeout for waiting on init done
    dt-bindings: rng: Describe Exynos4 PRNG bindings
    crypto: marvell/cesa - use __le32 for hardware descriptors
    crypto: marvell/cesa - fix missing cpu_to_le32() in mv_cesa_dma_add_op()
    crypto: marvell/cesa - use memcpy_fromio()/memcpy_toio()
    crypto: marvell/cesa - use gfp_t for gfp flags
    crypto: marvell/cesa - use dma_addr_t for cur_dma
    crypto: marvell/cesa - use readl_relaxed()/writel_relaxed()
    crypto: caam - fix indentation of close braces
    crypto: caam - only export the state we really need to export
    crypto: caam - fix non-block aligned hash calculation
    crypto: caam - avoid needlessly saving and restoring caam_hash_ctx
    crypto: caam - print errno code when hash registration fails
    crypto: marvell/cesa - fix memory leak
    crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req()
    crypto: marvell/cesa - rearrange handling for sw padded hashes
    ...
| * crypto: marvell/cesa - use __le32 for hardware descriptors (Russell King, 2015-10-20; 2 files changed, -19/+22)
    Much of the driver uses cpu_to_le32() to convert values for descriptors to little endian
    before writing. Use __le32 to define the hardware-accessed parts of the descriptors, and
    ensure most places where it's reasonable to do so use cpu_to_le32() when assigning to
    these.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
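    A hedged sketch of the idea (the actual descriptor layout in the driver differs): declare
    the hardware-read words as __le32 so sparse flags missing conversions, and keep
    software-only bookkeeping in native types.

        struct mv_cesa_tdma_desc {
                /* read by the hardware: stored little-endian */
                __le32 byte_cnt;
                __le32 src;
                __le32 dst;
                __le32 next_dma;

                /* software-only bookkeeping keeps native types */
                dma_addr_t cur_dma;
                struct mv_cesa_tdma_desc *next;
                u32 flags;
        };

        /* assignments then make the endianness conversion explicit */
        tdma->byte_cnt = cpu_to_le32(size);
        tdma->next_dma = cpu_to_le32(next_dma_addr);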
| * crypto: marvell/cesa - fix missing cpu_to_le32() in mv_cesa_dma_add_op() (Russell King, 2015-10-20; 1 file changed, -1/+1)
    When tdma->src is freed in mv_cesa_dma_cleanup(), we convert the DMA address from a
    little-endian value prior to calling dma_pool_free(). However, mv_cesa_dma_add_op() assigns
    tdma->src without first converting the DMA address to little endian. Fix this.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - use memcpy_fromio()/memcpy_toio() (Russell King, 2015-10-20; 2 files changed, -13/+14)
    Use the IO memcpy() functions when copying from/to MMIO memory. These locations were found
    via sparse.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
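    SRAM mapped with ioremap() is __iomem memory, so a plain memcpy() is not appropriate; a
    hedged sketch (offset and field names assumed):

        memcpy_toio(engine->sram + CESA_SA_DATA_SRAM_OFFSET, buf, len);
        memcpy_fromio(creq->state, engine->sram + CESA_SA_DATA_SRAM_OFFSET, digsize);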
| * crypto: marvell/cesa - use gfp_t for gfp flags (Russell King, 2015-10-20; 2 files changed, -7/+4)
    Use gfp_t, not u32, for the GFP flags.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - use dma_addr_t for cur_dma (Russell King, 2015-10-20; 2 files changed, -4/+6)
    cur_dma is part of the software state, not read by the hardware. Storing it in LE32 format
    is wrong; use dma_addr_t for this.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - use readl_relaxed()/writel_relaxed() (Russell King, 2015-10-20; 4 files changed, -16/+15)
    Use relaxed IO accessors where appropriate.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
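    A hedged sketch (register and bit names assumed): ordinary register traffic that does not
    need to be ordered against DMA memory accesses can use the relaxed accessors.

        u32 mask = readl_relaxed(engine->regs + CESA_SA_INT_MSK);

        writel_relaxed(mask | CESA_SA_INT_ACC0_IDMA_DONE,
                       engine->regs + CESA_SA_INT_MSK);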
| * crypto: marvell/cesa - fix memory leak (Boris Brezillon, 2015-10-20; 1 file changed, -12/+10)
    The local chain variable is not cleaned up if an error occurs in the middle of DMA chain
    creation. Fix that by dropping the local chain variable and using the dreq->chain field,
    which will be cleaned up by mv_cesa_dma_cleanup() in case of errors.
    Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
    Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req() (Russell King, 2015-10-20; 1 file changed, -8/+0)
    When adding the software padding, this must be done using the first/mid fragment mode, and
    any subsequent operation needs to be a mid-fragment. Fix this.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - rearrange handling for sw padded hashes (Russell King, 2015-10-20; 1 file changed, -26/+18)
    Rearrange the last request handling for hashes which require software padding. We prepare
    the padding to be appended, and then append as much of the padding to any existing data
    that's already queued up, adding an operation block and launching the operation. Any
    remainder is then appended as a separate operation. This ensures that the hardware only
    ever sees multiples of the hash block size to be operated on for software padded hashes,
    thus ensuring that the engine always indicates that it has finished the calculation.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - rearrange handling for hw finished hashes (Russell King, 2015-10-20; 1 file changed, -11/+24)
    Rearrange the last request handling for hardware finished hashes by moving the generation
    of the fragment operation into this path. This results in a simplified sequence to handle
    this case, and allows us to move the software padded case further down into the function.
    Add comments describing these parts.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - rearrange last request handling (Russell King, 2015-10-20; 1 file changed, -11/+19)
    Move the test for the last request out of mv_cesa_ahash_dma_last_req() to its caller, and
    move the mv_cesa_dma_add_frag() down into this function.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - avoid adding final operation within loop (Russell King, 2015-10-20; 1 file changed, -7/+17)
    Avoid adding the final operation within the loop, but instead add it outside. We combine
    this with the handling for the no-data case.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - ensure iter.base.op_len is the full op length (Russell King, 2015-10-20; 1 file changed, -4/+4)
    When we process the last request of data, and the request contains user data, the loop in
    mv_cesa_ahash_dma_req_init() marks the first data size as being iter.base.op_len, which
    does not include the size of the cache data. This means we end up hashing an insufficient
    amount of data. Fix this by always including the cache size in the first operation length
    of any request. This has the effect that for a request containing no user data,
      iter.base.op_len === iter.src.op_offset === creq->cache_ptr
    As a result, we include one further change to use iter.base.op_len in the
    cache-but-no-user-data case to make the next change clearer.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - use presence of scatterlist to determine data load (Russell King, 2015-10-20; 1 file changed, -18/+21)
    Use the presence of the scatterlist to determine whether we should load any new user data
    to the engine. The following shall always be true at this point:
      iter.base.op_len == 0 === iter.src.sg
    In doing so, we can:
    1. eliminate the test for iter.base.op_len inside the loop, which makes the loop operation
       more obvious and understandable.
    2. move the operation generation for the cache-only case.
    This prepares the code for the next step in its transformation, and also uncovers a bug
    that will be fixed in the next patch.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: marvell/cesa - move mv_cesa_dma_add_frag() calls (Russell King, 2015-10-20; 1 file changed, -42/+29)
    Move the calls to mv_cesa_dma_add_frag() into the parent function,
    mv_cesa_ahash_dma_req_init(). This is in preparation for changing when we generate the
    operation blocks, as we need to avoid generating a block for a partial hash block at the
    end of the user data.
    Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>