path: root/crypto/Makefile
Commit message (Author, Date, Files, Lines changed)
* crypto: twofish: Rename twofish to twofish_generic and add an alias (Joachim Fritschi, 2010-06-03, 1 file, -1/+1)
  This fixes the broken autoloading of the corresponding twofish assembler ciphers on x86 and x86_64 if they are available. The module name of the generic implementation was in conflict with the alias in the assembler modules. The generic twofish C implementation is renamed to twofish_generic, matching the other algorithms that have assembler implementations, and a module alias is added for 'twofish'. You can now load 'twofish' to get the best implementation by priority, 'twofish-generic' to get the C implementation, or 'twofish-asm' to get the assembler version of the cipher.
  Signed-off-by: Joachim Fritschi <jfritschi@freenet.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
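As a hedged sketch of what this naming scheme means for callers (not code from the patch itself): requesting the algorithm name "twofish" picks the highest-priority registered implementation, the driver names pin a specific module, and the renamed generic module carries the extra alias so module autoloading still works. Error handling and module boilerplate are only illustrative.

    #include <linux/crypto.h>
    #include <linux/err.h>
    #include <linux/module.h>

    static int __init twofish_user_example(void)
    {
            /* "twofish" resolves to the best-priority implementation that is
             * registered (assembler if present, otherwise the generic C code);
             * "twofish-generic" or "twofish-asm" would pin a specific driver. */
            struct crypto_cipher *tfm = crypto_alloc_cipher("twofish", 0, 0);

            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            /* ... crypto_cipher_setkey() / crypto_cipher_encrypt_one() ... */
            crypto_free_cipher(tfm);
            return 0;
    }
    module_init(twofish_user_example);

    /* What the renamed generic module now advertises so that a request for
     * plain "twofish" can still load it: */
    MODULE_ALIAS("twofish");
    MODULE_LICENSE("GPL");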
* crypto: pcomp - Fix illegal Kconfig configuration (Herbert Xu, 2010-06-03, 1 file, -1/+1)
  The PCOMP Kconfig entry currently allows the following combination, which is illegal:
    ZLIB=y PCOMP=y ALGAPI=m ALGAPI2=y MANAGER=m MANAGER2=m
  This patch fixes this by adding PCOMP2 so that PCOMP can select ALGAPI to propagate the setting to MANAGER2.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: pcrypt - Add pcrypt crypto parallelization wrapper (Steffen Klassert, 2010-01-07, 1 file, -0/+1)
  This patch adds a parallel crypto template that takes a crypto algorithm and converts it to process the crypto transforms in parallel. For the moment only aead algorithms are supported.
  Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: vmac - New hash algorithm for intel_txt support (Shane Wang, 2009-09-02, 1 file, -0/+1)
  This patch adds VMAC (a fast MAC) support into the crypto framework.
  Signed-off-by: Shane Wang <shane.wang@intel.com>
  Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ghash - Add GHASH digest algorithm for GCM (Huang Ying, 2009-08-06, 1 file, -0/+1)
  GHASH is implemented as a shash algorithm. The actual implementation is copied from gcm.c. This makes it possible to add architecture/hardware accelerated GHASH implementation.
  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
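For orientation, a minimal, hedged sketch of driving a shash such as "ghash" from kernel code; the allocation and one-shot digest helpers are standard crypto API calls, and the 16-byte key and digest sizes are GHASH's. This is illustrative, not the patch's code.

    #include <crypto/hash.h>
    #include <linux/err.h>
    #include <linux/slab.h>

    static int ghash_one_shot(const u8 key[16], const u8 *data,
                              unsigned int len, u8 out[16])
    {
            struct crypto_shash *tfm;
            struct shash_desc *desc;
            int err;

            tfm = crypto_alloc_shash("ghash", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_shash_setkey(tfm, key, 16);
            if (err)
                    goto out_free_tfm;

            /* The descriptor carries all per-call state, which is what makes
             * shash reentrant. */
            desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
            if (!desc) {
                    err = -ENOMEM;
                    goto out_free_tfm;
            }
            desc->tfm = tfm;

            err = crypto_shash_digest(desc, data, len, out);

            kfree(desc);
    out_free_tfm:
            crypto_free_shash(tfm);
            return err;
    }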
* crypto: hash - Remove legacy hash/digest implementation (Herbert Xu, 2009-07-14, 1 file, -2/+1)
  This patch removes the implementation of hash and digest now that no algorithms use them anymore. The interface, though, will remain until the users are converted across.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: zlib - New zlib crypto module, using pcomp (Geert Uytterhoeven, 2009-03-04, 1 file, -0/+1)
  Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
  Cc: James Morris <jmorris@namei.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: compress - Add pcomp interface (Geert Uytterhoeven, 2009-03-04, 1 file, -0/+2)
  The current "comp" crypto interface supports one-shot (de)compression only, i.e. the whole data buffer to be (de)compressed must be passed at once, and the whole (de)compressed data buffer will be received at once. In several use-cases (e.g. compressed file systems that store files in big compressed blocks), this workflow is not suitable. Furthermore, the "comp" type doesn't provide for the configuration of (de)compression parameters, and always allocates workspace memory for both compression and decompression, which may waste memory.

  To solve this, add a "pcomp" partial (de)compression interface that provides the following operations:
    - crypto_compress_{init,update,final}() for compression,
    - crypto_decompress_{init,update,final}() for decompression,
    - crypto_{,de}compress_setup(), to configure (de)compression parameters (incl. allocating workspace memory).

  The (de)compression methods take a struct comp_request, which was mimicked after the z_stream object in zlib, and contains buffer pointer and length pairs for input and output. The setup methods take an opaque parameter pointer and length pair. Parameters are supposed to be encoded using netlink attributes, whose meanings depend on the actual (name of the) (de)compression algorithm.
  Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
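A hedged sketch of how the partial interface described above might be driven. The struct comp_request field names (next_in/avail_in/next_out/avail_out) follow the z_stream-style layout the text mentions and should be treated as an assumption, as should the exact function signatures; a real user would also call crypto_compress_setup() beforehand with netlink-encoded parameters.

    #include <crypto/compress.h>
    #include <linux/err.h>

    static int pcomp_compress_chunked(struct crypto_pcomp *tfm,
                                      const u8 *in, unsigned int inlen,
                                      u8 *out, unsigned int outlen)
    {
            /* Field names assumed from the z_stream-like description above. */
            struct comp_request req = {
                    .next_in   = in,
                    .avail_in  = inlen,
                    .next_out  = out,
                    .avail_out = outlen,
            };
            int err;

            err = crypto_compress_init(tfm);
            if (err)
                    return err;

            /* update() may be called repeatedly as more input arrives ... */
            err = crypto_compress_update(tfm, &req);
            if (err < 0)
                    return err;

            /* ... and final() flushes whatever the algorithm still buffers. */
            err = crypto_compress_final(tfm, &req);
            return err < 0 ? err : 0;
    }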
* crypto: api - Use dedicated workqueue for crypto subsystem (Huang Ying, 2009-02-19, 1 file, -0/+2)
  A dedicated workqueue named kcrypto_wq is created to be used by the crypto subsystem. The system-shared keventd_wq is not suitable for encryption/decryption because of a potential starvation problem.
  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
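By way of a hedged illustration only: work that would previously have gone to the shared keventd queue via schedule_work() is instead queued on the dedicated workqueue named above. The extern declaration is shown inline here as an assumption; in the tree it would come from a crypto header.

    #include <linux/workqueue.h>

    extern struct workqueue_struct *kcrypto_wq;  /* provided by the crypto core */

    static struct work_struct my_completion_work;  /* INIT_WORK()'d elsewhere */

    static void queue_crypto_completion(void)
    {
            /* Avoids starving, and being starved by, unrelated keventd users. */
            queue_work(kcrypto_wq, &my_completion_work);
    }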
* crypto: hash - Add shash interface (Herbert Xu, 2008-12-25, 1 file, -0/+1)
  The shash interface replaces the current synchronous hash interface. It improves over hash in two ways. Firstly shash is reentrant, meaning that the same tfm may be used by two threads simultaneously as all hashing state is stored in a local descriptor.

  The other enhancement is that shash no longer takes scatter list entries. This is because shash is specifically designed for synchronous algorithms and as such scatter lists are unnecessary.

  All existing hash users will be converted to shash once the algorithms have been completely converted. There is also a new finup function that combines update with final. This will be extended to ahash once the algorithm conversion is done.

  This is also the first time that an algorithm type has its own registration function. Existing algorithm types will be converted to this way in due course.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
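A hedged, skeletal example of the per-type registration referred to above. The shash_alg fields shown (digestsize, descsize, init/update/final plus the usual crypto_alg base) are the common ones, and the toy "hash" is purely illustrative; treat the details as an approximation, not the patch's exact code.

    #include <crypto/internal/hash.h>
    #include <linux/module.h>
    #include <linux/string.h>

    struct example_ctx { u32 state; };

    static int example_init(struct shash_desc *desc)
    {
            struct example_ctx *ctx = shash_desc_ctx(desc);

            ctx->state = 0;
            return 0;
    }

    static int example_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
    {
            struct example_ctx *ctx = shash_desc_ctx(desc);

            while (len--)
                    ctx->state += *data++;  /* toy "hash" for illustration */
            return 0;
    }

    static int example_final(struct shash_desc *desc, u8 *out)
    {
            struct example_ctx *ctx = shash_desc_ctx(desc);

            memcpy(out, &ctx->state, sizeof(ctx->state));
            return 0;
    }

    static struct shash_alg example_alg = {
            .digestsize = 4,
            .descsize   = sizeof(struct example_ctx),
            .init       = example_init,
            .update     = example_update,
            .final      = example_final,
            .base       = {
                    .cra_name        = "example-sum",
                    .cra_driver_name = "example-sum-generic",
                    .cra_priority    = 100,
                    .cra_blocksize   = 1,
                    .cra_module      = THIS_MODULE,
            },
    };

    static int __init example_mod_init(void)
    {
            return crypto_register_shash(&example_alg);
    }

    static void __exit example_mod_exit(void)
    {
            crypto_unregister_shash(&example_alg);
    }

    module_init(example_mod_init);
    module_exit(example_mod_exit);
    MODULE_LICENSE("GPL");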
* crypto: api - Disallow cryptomgr as a module if algorithms are built-in (Herbert Xu, 2008-12-10, 1 file, -9/+9)
  If we have at least one algorithm built-in then it no longer makes sense to have the testing framework, and hence cryptomgr, be a module. It should be either on or off, i.e., built-in or disabled. This just happens to stop a potential runaway modprobe loop that seems to trigger on at least one distro.
  With fixes from Evgeniy Polyakov.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rng - RNG interface and implementation (Neil Horman, 2008-08-29, 1 file, -1/+3)
  This patch adds a random number generator interface as well as a cryptographic pseudo-random number generator based on AES. It is meant to be used in cases where a deterministic CPRNG is required. One of the first applications will be as an input in the IPsec IV generation process.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
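A hedged sketch of the consumer side of the new interface. The "stdrng" algorithm name and the exact reset/get_bytes signatures are assumptions based on the RNG API rather than quotes from the patch.

    #include <crypto/rng.h>
    #include <linux/err.h>

    static int fill_iv_material(u8 *buf, unsigned int len,
                                const u8 *seed, unsigned int slen)
    {
            struct crypto_rng *rng;
            int err;

            rng = crypto_alloc_rng("stdrng", 0, 0);   /* name assumed */
            if (IS_ERR(rng))
                    return PTR_ERR(rng);

            err = crypto_rng_reset(rng, seed, slen);  /* (re)seed the CPRNG */
            if (!err)
                    err = crypto_rng_get_bytes(rng, buf, len);

            crypto_free_rng(rng);
            return err;
    }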
* crypto: api - Add fips_enable flag (Neil Horman, 2008-08-29, 1 file, -0/+2)
  Add the ability to turn FIPS-compliant mode on or off at boot. In order to be FIPS compliant, several checks may need to be performed that may be construed as unnecessary in a non-compliant mode. This patch allows us to set a kernel flag indicating that we are running in FIPS-compliant mode from boot up. It also exports that mode information to user space via a sysctl (/proc/sys/crypto/fips_enabled).
  Tested successfully by me.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
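For illustration (hedged; the header location is an assumption), in-kernel code can simply branch on the flag, while user space reads the sysctl mentioned above from /proc/sys/crypto/fips_enabled.

    #include <linux/fips.h>   /* assumed home of the fips_enabled flag */

    static int example_setkey_check(unsigned int keylen)
    {
            /* In FIPS mode, reject parameters the compliance rules forbid;
             * the 16-byte threshold here is just an example. */
            if (fips_enabled && keylen < 16)
                    return -EINVAL;
            return 0;
    }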
* crypto: skcipher - Move IV generators into their own modules (Herbert Xu, 2008-08-29, 1 file, -2/+2)
  This patch moves the default IV generators into their own modules in order to break a dependency loop between cryptomgr, rng, and blkcipher.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: cryptomgr - Add test infrastructure (Herbert Xu, 2008-08-29, 1 file, -0/+2)
  This patch moves the newly created alg_test infrastructure into cryptomgr. This shall allow us to use it for testing at algorithm registration.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Revert "crypto: prng - Deterministic CPRNG" (Herbert Xu, 2008-07-15, 1 file, -1/+1)
  This patch is clearly not ready yet for prime time.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: prng - Deterministic CPRNG (Neil Horman, 2008-07-10, 1 file, -1/+1)
  This patch adds a cryptographic pseudo-random number generator based on CTR(AES-128). It is meant to be used in cases where a deterministic CPRNG is required. One of the first applications will be as an input in the IPsec IV generation process.
  Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] hash: Add asynchronous hash support (Loc Ho, 2008-07-10, 1 file, -0/+1)
  This patch adds asynchronous hash and digest support.
  Signed-off-by: Loc Ho <lho@amcc.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
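A hedged sketch of a caller using the asynchronous hash interface synchronously; request allocation, scatterlist setup and the digest call follow the ahash API, but the "sha1" name and the simplified completion handling are assumptions for illustration only.

    #include <crypto/hash.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    static int ahash_digest_once(const u8 *data, unsigned int len, u8 *out)
    {
            struct crypto_ahash *tfm;
            struct ahash_request *req;
            struct scatterlist sg;
            int err;

            tfm = crypto_alloc_ahash("sha1", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            req = ahash_request_alloc(tfm, GFP_KERNEL);
            if (!req) {
                    err = -ENOMEM;
                    goto out_free_tfm;
            }

            sg_init_one(&sg, data, len);
            ahash_request_set_callback(req, 0, NULL, NULL);
            ahash_request_set_crypt(req, &sg, out, len);

            /* A real async user would pass a callback and wait on
             * -EINPROGRESS / -EBUSY instead of treating them as errors. */
            err = crypto_ahash_digest(req);

            ahash_request_free(req);
    out_free_tfm:
            crypto_free_ahash(tfm);
            return err;
    }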
* [CRYPTO] ripemd: Add support for RIPEMD-256 and RIPEMD-320 (Adrian-Ken Rueegsegger, 2008-07-10, 1 file, -0/+2)
  This patch adds support for the extended RIPEMD hash algorithms RIPEMD-256 and RIPEMD-320.
  Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] ripemd: Add support for RIPEMD hash algorithms (Adrian-Ken Rueegsegger, 2008-07-10, 1 file, -0/+2)
  This patch adds support for the RIPEMD-128 and RIPEMD-160 hash algorithms.
  Signed-off-by: Adrian-Ken Rueegsegger <rueegsegger@swiss-it.ch>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Make the crypto subsystem fully modular (Sebastian Siewior, 2008-04-21, 1 file, -1/+2)
  Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] cts: Add CTS mode required for Kerberos AES support (Kevin Coffman, 2008-04-21, 1 file, -0/+1)
  Implement a CTS wrapper for CBC mode, required to support AES encryption for Kerberos (RFC 3962).
  Signed-off-by: Kevin Coffman <kwc@citi.umich.edu>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] sha512: Rename sha512 to sha512_generic (Jan Glauber, 2008-04-21, 1 file, -1/+1)
  Rename sha512 to sha512_generic and add a MODULE_ALIAS for sha512 so all sha512 implementations can be loaded automatically. Keep the broken tabs so git recognizes this as a rename.
  Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] skcipher: Move chainiv/seqiv into crypto_blkcipher module (Herbert Xu, 2008-02-23, 1 file, -2/+2)
  For compatibility with dm-crypt initramfs setups it is useful to merge chainiv/seqiv into the crypto_blkcipher module. Since they're required by most algorithms anyway this is an acceptable trade-off.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] ccm: Added CCM mode (Joy Latten, 2008-01-11, 1 file, -0/+1)
  This patch adds Counter with CBC-MAC (CCM) support. RFC 3610 and NIST Special Publication 800-38C were referenced.
  Signed-off-by: Joy Latten <latten@austin.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] seqiv: Add Sequence Number IV Generator (Herbert Xu, 2008-01-11, 1 file, -0/+1)
  This generator generates an IV based on a sequence number by xoring it with a salt. This algorithm is mainly useful for CTR and similar modes. This patch also sets it as the default IV generator for ctr.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] eseqiv: Add Encrypted Sequence Number IV Generator (Herbert Xu, 2008-01-11, 1 file, -0/+1)
  This generator generates an IV based on a sequence number by xoring it with a salt and then encrypting it with the same key as used to encrypt the plain text. This algorithm requires that the block size be equal to the IV size. It is mainly useful for CBC.

  It has one noteworthy property: for IPsec the IV happens to lie just before the plain text, so the IV generation simply increases the number of encrypted blocks by one. Therefore the cost of this generator is entirely dependent on the speed of the underlying cipher.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
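Purely as an illustrative sketch of the construction described above (not the generator's actual code, and with endianness and sequence-number placement assumed): the sequence number is padded into an IV-sized block, xored with the salt, and the result is then encrypted with the payload key.

    #include <linux/string.h>
    #include <linux/types.h>

    /* Illustrative only: IV = E_key(seq_padded ^ salt), with ivsize == blocksize. */
    static void eseqiv_build_iv_sketch(u8 *iv, const u8 *salt, u64 seq,
                                       unsigned int ivsize)
    {
            unsigned int i;

            memset(iv, 0, ivsize);
            memcpy(iv + ivsize - sizeof(seq), &seq, sizeof(seq)); /* place seq */
            for (i = 0; i < ivsize; i++)
                    iv[i] ^= salt[i];                             /* xor with salt */
            /* ... then encrypt iv in place with the same key/cipher as the data,
             * which is why the generator's cost is that of one extra block. */
    }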
* [CRYPTO] chainiv: Add chain IV generator (Herbert Xu, 2008-01-11, 1 file, -0/+1)
  The chain IV generator is the one we've been using in the IPsec stack. It simply starts out with a random IV, then uses the last block of each encrypted packet's cipher text as the IV for the next packet.

  It can only be used by synchronous ciphers since we have to make sure that we don't start the encryption of the next packet until the last one has completed. It does have the advantage of using very little CPU time since it doesn't have to generate anything at all.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] blkcipher: Merge ablkcipher and blkcipher into one option/module (Herbert Xu, 2008-01-11, 1 file, -2/+4)
  With the impending addition of the givcipher type, both blkcipher and ablkcipher algorithms will use it to create givcipher objects. As such it no longer makes sense to split the system between ablkcipher and blkcipher. In particular, both ablkcipher.c and blkcipher.c would need to use the givcipher type, which has to reside in ablkcipher.c since it shares much code with it.

  This patch merges the two Kconfig options as well as the modules into one.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] lzo: Add LZO compression algorithm support (Zoltan Sogor, 2008-01-11, 1 file, -0/+1)
  Add LZO compression algorithm support.
  Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] gcm: New algorithm (Mikko Herranen, 2008-01-11, 1 file, -0/+1)
  Add GCM/GMAC support to the crypto API. GCM (Galois/Counter Mode) is an AEAD mode of operation for any block cipher with a block size of 16. The typical example is AES-GCM.
  Signed-off-by: Mikko Herranen <mh1@iki.fi>
  Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] salsa20: Salsa20 stream cipher (Tan Swee Heng, 2008-01-11, 1 file, -0/+1)
  This patch implements the Salsa20 stream cipher using the blkcipher interface.

  The core cipher code comes from Daniel Bernstein's submission to eSTREAM: http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/ref/
  The test vectors come from: http://www.ecrypt.eu.org/stream/svn/viewcvs.cgi/ecrypt/trunk/submissions/salsa20/full/

  It has been tested successfully with "modprobe tcrypt mode=34" on a UML instance.
  Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] ctr: Add CTR (Counter) block cipher mode (Joy Latten, 2008-01-11, 1 file, -0/+1)
  This patch implements CTR mode for IPsec. It is based off of RFC 3686. Please note:

  1. CTR turns a block cipher into a stream cipher. Encryption is done in blocks, however the last block may be a partial block. A "counter block" is encrypted, creating a keystream that is xor'ed with the plaintext. The counter portion of the counter block is incremented after each block of plaintext is encrypted. Decryption is performed in the same manner.

  2. The CTR counter block is composed of nonce + IV + counter. The size of the counter block is equivalent to the blocksize of the cipher: sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize. The CTR template requires the name of the cipher algorithm, the sizeof the nonce, and the sizeof the iv: ctr(cipher,sizeof_nonce,sizeof_iv). So for example, ctr(aes,4,8) specifies that the counter block will be composed of 4 bytes from a nonce, 8 bytes from the iv, and 4 bytes for the counter, since aes has a blocksize of 16 bytes.

  3. The counter portion of the counter block is stored in big endian for conformance to RFC 3686.
  Signed-off-by: Joy Latten <latten@austin.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
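To make the ctr(aes,4,8) example above concrete, here is a hedged sketch of how such a 16-byte counter block is laid out; the helper is illustrative, not kernel code, and only the big-endian counter placement is taken from the text.

    #include <linux/string.h>
    #include <linux/types.h>

    /* Layout for ctr(aes,4,8): 4-byte nonce | 8-byte IV | 4-byte counter. */
    static void ctr_aes_4_8_counterblock(u8 block[16], const u8 nonce[4],
                                         const u8 iv[8], u32 counter)
    {
            memcpy(block, nonce, 4);
            memcpy(block + 4, iv, 8);
            block[12] = counter >> 24;  /* counter stored big endian ... */
            block[13] = counter >> 16;
            block[14] = counter >> 8;
            block[15] = counter;        /* ... for conformance to RFC 3686 */
    }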
* [CRYPTO] sha: Load the SHA[1|256] module by an alias (Sebastian Siewior, 2007-10-10, 1 file, -2/+2)
  Loading the crypto algorithm by the alias instead of by module directly has the advantage that all possible implementations of this algorithm are loaded automatically and the crypto API can choose the best one depending on its priority. Additionally it ensures that the generic implementation as well as the HW driver (if available) is loaded in case the HW driver needs the generic version as fallback in corner cases.

  Also remove the probe for sha1 in padlock's init code. Quote from Herbert: "The probe is actually pointless since we can always probe when the algorithm is actually used which does not lead to dead-locks like this."
  Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] aes: Rename aes to aes-generic (Sebastian Siewior, 2007-10-10, 1 file, -1/+1)
  Loading the crypto algorithm by the alias instead of by module directly has the advantage that all possible implementations of this algorithm are loaded automatically and the crypto API can choose the best one depending on its priority. Additionally it ensures that the generic implementation as well as the HW driver (if available) is loaded in case the HW driver needs the generic version as fallback in corner cases.
  Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] des: Rename des to des-generic (Sebastian Siewior, 2007-10-10, 1 file, -1/+1)
  Loading the crypto algorithm by the alias instead of by module directly has the advantage that all possible implementations of this algorithm are loaded automatically and the crypto API can choose the best one depending on its priority.
  Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] xts: XTS blockcipher mode implementation without partial blocks (Rik Snel, 2007-10-10, 1 file, -0/+1)
  XTS is currently considered to be the successor of the LRW mode by the IEEE 1619 workgroup. LRW was discarded because it was not secure if the encryption key itself is encrypted with LRW. XTS does not have this problem.

  The implementation is pretty straightforward: a new function was added to gf128mul to handle GF(128) elements in ble format. Four test vectors from the specification http://grouper.ieee.org/groups/1619/email/pdf00086.pdf were added, and they verify on my system.
  Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] aead: Add authenc (Herbert Xu, 2007-10-10, 1 file, -0/+1)
  This patch adds the authenc algorithm which constructs an AEAD algorithm from an asynchronous block cipher and a hash. The construction is done by concatenating the encrypted result from the cipher with the output from the hash, as is used by the IPsec ESP protocol.

  The authenc algorithm exists as a template with four parameters: authenc(auth, authsize, enc, enckeylen). The authentication algorithm, the authentication size (i.e., truncating the output of the authentication algorithm), the encryption algorithm, and the encryption key length. Both the size field and the key length field are in bytes. For example, AES-128 with SHA1-HMAC would be represented by authenc(hmac(sha1), 12, cbc(aes), 16).

  The key for the authenc algorithm is the concatenation of the keys for the authentication algorithm with the encryption algorithm. For the above example, if a key of length 36 bytes is given, then hmac(sha1) would receive the first 20 bytes while the last 16 would be given to cbc(aes).
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
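A hedged sketch of the key layout in the worked example above: the single 36-byte key is just the two component keys concatenated, split 20/16 between hmac(sha1) and cbc(aes). The helper and its names are illustrative only.

    #include <linux/string.h>
    #include <linux/types.h>

    /* authenc(hmac(sha1), 12, cbc(aes), 16): one 36-byte key, two consumers. */
    static void authenc_split_key_example(const u8 key[36],
                                          u8 hmac_sha1_key[20],
                                          u8 cbc_aes_key[16])
    {
            memcpy(hmac_sha1_key, key, 20);      /* first 20 bytes: auth key */
            memcpy(cbc_aes_key, key + 20, 16);   /* last 16 bytes: cipher key */
    }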
* [CRYPTO] api: Move scatterwalk into algapi (Herbert Xu, 2007-10-10, 1 file, -2/+2)
  The scatterwalk code is only used by algorithms that can be built as a module. Therefore we can move it into algapi.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add aead crypto type (Herbert Xu, 2007-10-10, 1 file, -0/+1)
  This patch adds crypto_aead which is the interface for AEAD (Authenticated Encryption with Associated Data) algorithms. AEAD algorithms perform authentication and encryption in one step. Traditionally users (such as IPsec) would use two different crypto algorithms to perform these. With AEAD this comes down to one algorithm and one operation. Of course if traditional algorithms were used we'd still be doing two operations underneath. However, real AEAD algorithms may allow the underlying operations to be optimised as well.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
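As a hedged sketch only: a user of the new type allocates one transform and configures both the key and the authentication tag size on it, instead of juggling a cipher and a hash separately. The "ccm(aes)" name (from the CCM entry earlier in this log), the 8-byte ICV and the exact setup flow are illustrative assumptions.

    #include <crypto/aead.h>
    #include <linux/err.h>

    static struct crypto_aead *alloc_example_aead(const u8 *key, unsigned int keylen)
    {
            /* One tfm covers both encryption and authentication. */
            struct crypto_aead *tfm = crypto_alloc_aead("ccm(aes)", 0, 0);

            if (IS_ERR(tfm))
                    return tfm;

            if (crypto_aead_setkey(tfm, key, keylen) ||
                crypto_aead_setauthsize(tfm, 8)) {  /* 8-byte ICV, for example */
                    crypto_free_aead(tfm);
                    return ERR_PTR(-EINVAL);
            }
            return tfm;
    }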
* [CRYPTO] seed: New cipher algorithm (Hye-Shik Chang, 2007-10-10, 1 file, -0/+1)
  This patch adds support for the SEED cipher (RFC 4269). The cipher has been used by a few VPN appliance vendors in Korea for several years, and it was verified by KISA, who developed the algorithm itself. Given its importance in the Korean banking industry, it would be great if Linux incorporated the support.
  Signed-off-by: Hye-Shik Chang <perky@FreeBSD.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* async_tx: add the async_tx api (Dan Williams, 2007-07-13, 1 file, -1/+1)
  The async_tx api provides methods for describing a chain of asynchronous bulk memory transfers/transforms with support for inter-transactional dependencies. It is implemented as a dmaengine client that smooths over the details of different hardware offload engine implementations. Code that is written to the api can optimize for asynchronous operation and the api will fit the chain of operations to the available offload resources.

  "I imagine that any piece of ADMA hardware would register with the 'async_*' subsystem, and a call to async_X would be routed as appropriate, or be run in-line." - Neil Brown

  async_tx exploits the capabilities of struct dma_async_tx_descriptor to provide an api of the following general format:

	struct dma_async_tx_descriptor *
	async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
			  dma_async_tx_callback cb_fn, void *cb_param)
	{
		struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
		struct dma_device *device = chan ? chan->device : NULL;
		int int_en = cb_fn ? 1 : 0;
		struct dma_async_tx_descriptor *tx = device ?
			device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

		if (tx) { /* run <operation> asynchronously */
			...
			tx->tx_set_dest(addr, tx, index);
			...
			tx->tx_set_src(addr, tx, index);
			...
			async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
		} else { /* run <operation> synchronously */
			...
			<operation>
			...
			async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
		}

		return tx;
	}

  async_tx_find_channel() returns a capable channel from its pool. The channel pool is organized as a per-cpu array of channel pointers. The async_tx_rebalance() routine is tasked with managing these arrays. In the uniprocessor case async_tx_rebalance() tries to spread responsibility evenly over channels of similar capabilities. For example if there are two copy+xor channels, one will handle copy operations and the other will handle xor. In the SMP case async_tx_rebalance() attempts to spread the operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor channel0 while cpu1 gets copy channel 1 and xor channel 1. When a dependency is specified async_tx_find_channel defaults to keeping the operation on the same channel. A xor->copy->xor chain will stay on one channel if it supports both operation types, otherwise the transaction will transition between a copy and a xor resource.

  Currently the raid5 implementation in the MD raid456 driver has been converted to the async_tx api. A driver for the offload engines on the Intel Xscale series of I/O processors, iop-adma, is provided in a later commit. With the iop-adma driver and async_tx, raid456 is able to offload copy, xor, and xor-zero-sum operations to hardware engines. On iop342 tiobench showed higher throughput for sequential writes (20 - 30% improvement) and sequential reads to a degraded array (40 - 55% improvement). For the other cases performance was roughly equal, +/- a few percentage points. On an x86-smp platform the performance of the async_tx implementation (in synchronous mode) was also +/- a few percentage points of the original implementation. According to 'top' on iop342 CPU utilization drops from ~50% to ~15% during a 'resync' while the speed according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.

  The tiobench command line used for testing was: tiobench --size 2048 --block 4096 --block 131072 --dir /mnt/raid --numruns 5
  * iop342 had 1GB of memory available

  Details:
  * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making async_tx_find_channel a static inline routine that always returns NULL
  * when a callback is specified for a given transaction an interrupt will fire at operation completion time and the callback will occur in a tasklet. if the channel does not support interrupts then a live polling wait will be performed
  * the api is written as a dmaengine client that requests all available channels
  * In support of dependencies the api implicitly schedules channel-switch interrupts. The interrupt triggers the cleanup tasklet which causes pending operations to be scheduled on the next channel
  * Xor engines treat an xor destination address differently than a software xor routine. To the software routine the destination address is an implied source, whereas engines treat it as a write-only destination. This patch modifies the xor_blocks routine to take an explicit destination address to mirror the hardware.

  Changelog:
  * fixed a leftover debug print
  * don't allow callbacks in async_interrupt_cond
  * fixed xor_block changes
  * fixed usage of ASYNC_TX_XOR_DROP_DEST
  * drop dma mapping methods, suggested by Chris Leech
  * printk warning fixups from Andrew Morton
  * don't use inline in C files, Adrian Bunk
  * select the API when MD is enabled
  * BUG_ON xor source counts <= 1
  * implicitly handle hardware concerns like channel switching and interrupts, Neil Brown
  * remove the per operation type list, and distribute operation capabilities evenly amongst the available channels
  * simplify async_tx_find_channel to optimize the fast path
  * introduce the channel_table_initialized flag to prevent early calls to the api
  * reorganize the code to mimic crypto
  * include mm.h as not all archs include it in dma-mapping.h
  * make the Kconfig options non-user visible, Adrian Bunk
  * move async_tx under crypto since it is meant as 'core' functionality, and the two may share algorithms in the future
  * move large inline functions into c files
  * checkpatch.pl fixes
  * gpl v2 only correction

  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  Acked-By: NeilBrown <neilb@suse.de>
* xor: make 'xor_blocks' a library routine for use with async_tx (Dan Williams, 2007-07-13, 1 file, -0/+6)
  The async_tx api tries to use a dma engine for an operation, but will fall back to an optimized software routine otherwise. Xor support is implemented using the raid5 xor routines. For organizational purposes this routine is moved to a common area.

  The following fixes are also made:
  * rename xor_block => xor_blocks, suggested by Adrian Bunk
  * ensure that xor.o initializes before md.o in the built-in case
  * checkpatch.pl fixes
  * mark calibrate_xor_blocks __init, Adrian Bunk

  Cc: Adrian Bunk <bunk@stusta.de>
  Cc: NeilBrown <neilb@suse.de>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
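For reference, a hedged sketch of calling the renamed routine; the (count, length, destination, array of sources) shape matches how xor_blocks is declared in the raid headers, but treat the include path and exact prototype as assumptions.

    #include <linux/raid/xor.h>   /* header path assumed */

    /* dst ^= a ^ b over 'len' bytes, using the calibrated xor routine. */
    static void xor_two_sources(void *dst, void *a, void *b, unsigned int len)
    {
            void *srcs[2] = { a, b };

            xor_blocks(2, len, dst, srcs);
    }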
* [CRYPTO] cryptd: Add software async crypto daemon (Herbert Xu, 2007-05-02, 1 file, -0/+1)
  This patch adds the cryptd module which is a template that takes a synchronous software crypto algorithm and converts it to an asynchronous one by executing it in a kernel thread.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] api: Add async blkcipher type (Herbert Xu, 2007-05-02, 1 file, -0/+1)
  This patch adds the mid-level interface for asynchronous block ciphers. It also includes a generic queueing mechanism that can be used by other asynchronous crypto operations in future.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] camellia: added the code of Camellia cipher algorithm. (Noriaki TAKAMIYA, 2007-02-07, 1 file, -0/+1)
  This patch adds the main code of the Camellia cipher algorithm.
  Signed-off-by: Noriaki TAKAMIYA <takamiya@po.ntts.co.jp>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] fcrypt: Add FCrypt from RxRPC (David Howells, 2007-02-07, 1 file, -0/+1)
  Add a crypto module to provide FCrypt encryption as used by RxRPC.
  Signed-Off-By: David Howells <dhowells@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] pcbc: Add Propagated CBC template (David Howells, 2007-02-07, 1 file, -0/+1)
  Add PCBC crypto template support as used by RxRPC.
  Signed-Off-By: David Howells <dhowells@redhat.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode (Rik Snel, 2006-12-06, 1 file, -0/+1)
  Main module, this implements the Liskov Rivest Wagner block cipher mode in the new blockcipher API. The implementation is based on ecb.c.

  The LRW-32-AES specification I used can be found at: http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

  It implements the optimization specified as optional in the specification, and in addition it uses optimized multiplication routines from gf128mul.c. Since gf128mul.[ch] is not tested on bigendian, this cipher mode may currently fail badly on bigendian machines.
  Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* [CRYPTO] lib: table driven multiplications in GF(2^128) (Rik Snel, 2006-12-06, 1 file, -0/+1)
  A lot of cipher modes need multiplications in GF(2^128): LRW, ABL, GCM... I use functions from this library in my LRW implementation and I will also use them in my ABL (Arbitrary Block Length, an unencumbered (correct me if I am wrong) wide block cipher mode).

  Elements of GF(2^128) must be presented as u128 *; this encourages automatic and proper alignment. The library contains support for two different representations of GF(2^128), see the comment in gf128mul.h. There are different levels of optimization (memory/speed tradeoff).

  The code is based on work by Dr Brian Gladman. Notable changes:
  - deletion of two optimization modes
  - change from u32 to u64 for faster handling on 64bit machines
  - support for 'bbe' representation in addition to the, already implemented, 'lle' representation
  - move 'inline void' functions from header to 'static void' in the source file
  - update to use the linux coding style conventions

  The original can be found at: http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip
  The copyright (and GPL statement) of the original author is preserved.
  Signed-off-by: Rik Snel <rsnel@cube.dyndns.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
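A hedged sketch of using the library from the 'lle' side; the be128 type and the in-place multiply follow the gf128mul.h interface referred to above, but the exact signatures and include paths should be treated as assumptions.

    #include <crypto/b128ops.h>   /* be128 type, header path assumed */
    #include <crypto/gf128mul.h>

    /* Multiply r by x in GF(2^128), in place, using the 'lle' representation. */
    static void gf128_mul_lle_inplace(be128 *r, const be128 *x)
    {
            gf128mul_lle(r, x);   /* r = r * x */
    }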