* powerpc/crypto: add 842 crypto driver (Seth Jennings, 2012-08-01; 3 files changed, -0/+193)

This patch adds the 842 cryptographic API driver that submits compression requests to the 842 hardware compression accelerator driver (nx-compress). If the hardware accelerator goes offline for any reason (dynamic disable, migration, etc.), this driver will use LZO as a software failover for all future compression requests. For decompression requests, the 842 hardware driver contains a software implementation of the 842 decompressor to support the decompression of data that was compressed before the accelerator went offline.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

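The failover idea can be sketched roughly as follows. This is a minimal sketch, not the driver's code: nx842_hw_available() and nx842_hw_compress() are assumed helper names used only for illustration; only lzo1x_1_compress() is a real kernel API.

    #include <linux/types.h>
    #include <linux/lzo.h>

    /* Hypothetical sketch of a compress path with LZO software failover. */
    static int sketch_842_compress(const u8 *src, unsigned int slen,
                                   u8 *dst, unsigned int *dlen, void *wrkmem)
    {
        size_t out = *dlen;
        int ret;

        if (nx842_hw_available()) {                     /* assumed helper */
            ret = nx842_hw_compress(src, slen, dst, dlen);  /* assumed helper */
            if (ret != -ENODEV)
                return ret;
            /* accelerator went offline: fall through to software */
        }

        /* wrkmem must be at least LZO1X_1_MEM_COMPRESS bytes */
        ret = lzo1x_1_compress(src, slen, dst, &out, wrkmem);
        *dlen = out;
        return ret;
    }
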
* powerpc/crypto: add 842 hardware compression driver (Seth Jennings, 2012-08-01; 5 files changed, -0/+1644)

This patch adds the driver for interacting with the 842 compression accelerator on IBM Power7+ systems. The device is a child of the Platform Facilities Option (PFO) and shows up as a child of the IBM VIO bus. The compression/decompression API takes the same arguments as existing compression methods like lzo and deflate. The 842 hardware operates on 4K hardware pages and the driver breaks up input on 4K boundaries to submit it to the hardware accelerator.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* powerpc/crypto: add compression support to arch vec (Seth Jennings, 2012-08-01; 1 file changed, -2/+2)

This patch enables compression engine support in the architecture vector, which causes the Power hypervisor to allow access to the NX compression accelerator.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* powerpc/crypto: rework Kconfig (Seth Jennings, 2012-08-01; 5 files changed, -16/+29)

This patch creates a new submenu for the NX cryptographic hardware accelerator and breaks the NX options into their own Kconfig file under drivers/crypto/nx/Kconfig. This will permit additional NX functionality to be easily and more cleanly added in the future without touching drivers/crypto/Makefile|Kconfig.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: caam - set descriptor sharing type to SERIAL (Kim Phillips, 2012-08-01; 3 files changed, -7/+7)

SHARE_WAIT, whilst more optimal for association-less crypto, has the ability to start thrashing the CCB descriptor/key caches, given high levels of traffic across multiple security associations (and thus keys). Switch to using the SERIAL sharing type, which prefers the last used CCB for the SA. On a 2-DECO platform such as the P3041, this can improve performance by about 3.7%.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: caam - add backward compatible string sec4.0 (Shengzhou Liu, 2012-08-01; 2 files changed, -6/+15)

Some older device trees carry the string "fsl,sec4.0". To stay backward compatible with those device trees, first check for "fsl,sec-v4.0" and, if that fails, then check for "fsl,sec4.0".

Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
[Extended to include the new hash and rng code, which was omitted from the previous version of this patch during a rebase of the SDK version.]
Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

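A minimal sketch of that fallback pattern using the standard OF helper; the wrapper function name and the surrounding error handling are assumptions, not code taken from the patch:

    #include <linux/of.h>

    static struct device_node *caam_find_sec_node(void)
    {
        struct device_node *np;

        /* Prefer the current binding string... */
        np = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
        if (!np)
            /* ...but accept the legacy string from older device trees. */
            np = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");

        return np;  /* caller must of_node_put() when finished */
    }
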
* crypto: caam - fix possible deadlock condition (Kim Phillips, 2012-08-01; 1 file changed, -5/+5)

commit "crypto: caam - use non-irq versions of spinlocks for job rings" made two bad assumptions:

(a) The caam_jr_enqueue lock isn't used in softirq context. Not true: jr_enqueue can be interrupted by an incoming net interrupt and the received packet may be sent for encryption, via caam_jr_enqueue in softirq context, thereby inducing a deadlock. This is evidenced when running netperf over an IPSec tunnel between two P4080's, with spinlock debugging turned on:

[ 892.092569] BUG: spinlock lockup on CPU#7, netperf/10634, e8bf5f70
[ 892.098747] Call Trace:
[ 892.101197] [eff9fc10] [c00084c0] show_stack+0x48/0x15c (unreliable)
[ 892.107563] [eff9fc50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
[ 892.113399] [eff9fc80] [c0596494] _raw_spin_lock+0x3c/0x50
[ 892.118889] [eff9fc90] [c0445e74] caam_jr_enqueue+0xf8/0x250
[ 892.124550] [eff9fcd0] [c044a644] aead_decrypt+0x6c/0xc8
[ 892.129625] BUG: spinlock lockup on CPU#5, swapper/5/0, e8bf5f70
[ 892.129629] Call Trace:
[ 892.129637] [effa7c10] [c00084c0] show_stack+0x48/0x15c (unreliable)
[ 892.129645] [effa7c50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
[ 892.129652] [effa7c80] [c0596494] _raw_spin_lock+0x3c/0x50
[ 892.129660] [effa7c90] [c0445e74] caam_jr_enqueue+0xf8/0x250
[ 892.129666] [effa7cd0] [c044a644] aead_decrypt+0x6c/0xc8
[ 892.129674] [effa7d00] [c0509724] esp_input+0x178/0x334
[ 892.129681] [effa7d50] [c0519778] xfrm_input+0x77c/0x818
[ 892.129688] [effa7da0] [c050e344] xfrm4_rcv_encap+0x20/0x30
[ 892.129697] [effa7db0] [c04b90c8] ip_local_deliver+0x190/0x408
[ 892.129703] [effa7de0] [c04b966c] ip_rcv+0x32c/0x898
[ 892.129709] [effa7e10] [c048b998] __netif_receive_skb+0x27c/0x4e8
[ 892.129715] [effa7e80] [c048d744] netif_receive_skb+0x4c/0x13c
[ 892.129726] [effa7eb0] [c03c28ac] _dpa_rx+0x1a8/0x354
[ 892.129732] [effa7ef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
[ 892.129742] [effa7f10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
[ 892.129748] [effa7f40] [c03c153c] dpaa_eth_poll+0x20/0x94
[ 892.129754] [effa7f60] [c048dbd0] net_rx_action+0x13c/0x1f4
[ 892.129763] [effa7fa0] [c003d1b8] __do_softirq+0x108/0x1b0
[ 892.129769] [effa7ff0] [c000df58] call_do_softirq+0x14/0x24
[ 892.129775] [ebacfe70] [c0004868] do_softirq+0xd8/0x104
[ 892.129780] [ebacfe90] [c003d5a4] irq_exit+0xb8/0xd8
[ 892.129786] [ebacfea0] [c0004498] do_IRQ+0xa4/0x1b0
[ 892.129792] [ebacfed0] [c000fad8] ret_from_except+0x0/0x18
[ 892.129798] [ebacff90] [c0009010] cpu_idle+0x94/0xf0
[ 892.129804] [ebacffb0] [c059ff88] start_secondary+0x42c/0x430
[ 892.129809] [ebacfff0] [c0001e28] __secondary_start+0x30/0x84
[ 892.281474]
[ 892.282959] [eff9fd00] [c0509724] esp_input+0x178/0x334
[ 892.288186] [eff9fd50] [c0519778] xfrm_input+0x77c/0x818
[ 892.293499] [eff9fda0] [c050e344] xfrm4_rcv_encap+0x20/0x30
[ 892.299074] [eff9fdb0] [c04b90c8] ip_local_deliver+0x190/0x408
[ 892.304907] [eff9fde0] [c04b966c] ip_rcv+0x32c/0x898
[ 892.309872] [eff9fe10] [c048b998] __netif_receive_skb+0x27c/0x4e8
[ 892.315966] [eff9fe80] [c048d744] netif_receive_skb+0x4c/0x13c
[ 892.321803] [eff9feb0] [c03c28ac] _dpa_rx+0x1a8/0x354
[ 892.326855] [eff9fef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
[ 892.333212] [eff9ff10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
[ 892.338872] [eff9ff40] [c03c153c] dpaa_eth_poll+0x20/0x94
[ 892.344271] [eff9ff60] [c048dbd0] net_rx_action+0x13c/0x1f4
[ 892.349846] [eff9ffa0] [c003d1b8] __do_softirq+0x108/0x1b0
[ 892.355338] [eff9fff0] [c000df58] call_do_softirq+0x14/0x24
[ 892.360910] [e7169950] [c0004868] do_softirq+0xd8/0x104
[ 892.366135] [e7169970] [c003d5a4] irq_exit+0xb8/0xd8
[ 892.371101] [e7169980] [c0004498] do_IRQ+0xa4/0x1b0
[ 892.375979] [e71699b0] [c000fad8] ret_from_except+0x0/0x18
[ 892.381466] [e7169a70] [c0445e74] caam_jr_enqueue+0xf8/0x250
[ 892.387127] [e7169ab0] [c044ad4c] aead_givencrypt+0x6ac/0xa70
[ 892.392873] [e7169b20] [c050a0b8] esp_output+0x2b4/0x570
[ 892.398186] [e7169b80] [c0519b9c] xfrm_output_resume+0x248/0x7c0
[ 892.404194] [e7169bb0] [c050e89c] xfrm4_output_finish+0x18/0x28
[ 892.410113] [e7169bc0] [c050e8f4] xfrm4_output+0x48/0x98
[ 892.415427] [e7169bd0] [c04beac0] ip_local_out+0x48/0x98
[ 892.420740] [e7169be0] [c04bec7c] ip_queue_xmit+0x16c/0x490
[ 892.426314] [e7169c10] [c04d6128] tcp_transmit_skb+0x35c/0x9a4
[ 892.432147] [e7169c70] [c04d6f98] tcp_write_xmit+0x200/0xa04
[ 892.437808] [e7169cc0] [c04c8ccc] tcp_sendmsg+0x994/0xcec
[ 892.443213] [e7169d40] [c04eebfc] inet_sendmsg+0xd0/0x164
[ 892.448617] [e7169d70] [c04792f8] sock_sendmsg+0x8c/0xbc
[ 892.453931] [e7169e40] [c047aecc] sys_sendto+0xc0/0xfc
[ 892.459069] [e7169f10] [c047b934] sys_socketcall+0x110/0x25c
[ 892.464729] [e7169f40] [c000f480] ret_from_syscall+0x0/0x3c

(b) Since the caam_jr_dequeue lock is only used in bh context, semantically it should use _bh spin_lock types. spin_lock_bh semantics are to disable bottom halves, and are used when a lock is shared between softirq (bh) context and process and/or h/w IRQ context. Since the lock is only used within softirq context, and this tasklet is atomic, there is no need to do the additional work to disable bottom halves.

This patch adds bottom-half disabling protection to caam_jr_enqueue spin_locks to fix (a), and drops it from caam_jr_dequeue to fix (b).

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

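The resulting locking pattern can be sketched as follows; the struct and field names below are assumptions for illustration, not necessarily the driver's exact identifiers:

    #include <linux/spinlock.h>

    struct jr_queue {           /* stand-in for the driver's job-ring state */
        spinlock_t inplock;
        spinlock_t outlock;
    };

    /* Enqueue can run in process context but races with the same path called
     * from softirq (packet rx -> IPSec -> caam_jr_enqueue), so bottom halves
     * must be disabled while the input-ring lock is held. */
    static void jr_enqueue_locked(struct jr_queue *jrp)
    {
        spin_lock_bh(&jrp->inplock);
        /* ... write descriptor into the input ring, bump head ... */
        spin_unlock_bh(&jrp->inplock);
    }

    /* Dequeue runs only from the job-ring tasklet (softirq context), so a
     * plain spin_lock is enough: nothing in process or hard-IRQ context
     * takes this lock, and there is no need to disable bottom halves. */
    static void jr_dequeue_locked(struct jr_queue *jrp)
    {
        spin_lock(&jrp->outlock);
        /* ... read completed descriptors from the output ring ... */
        spin_unlock(&jrp->outlock);
    }
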
* crypto: cast6 - add x86_64/avx assembler implementation (Johannes Goetzfried, 2012-08-01; 5 files changed, -0/+1062)

This patch adds an x86_64/AVX assembler implementation of the Cast6 block cipher. The implementation processes eight blocks in parallel (two 4-block-chunk AVX operations). The table lookups are done in general-purpose registers. For small blocksizes the functions from the generic module are called. A good performance increase is provided for blocksizes greater than or equal to 128B.

Patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results, Intel Core i5-2500 CPU (fam:6, model:42, step:7), cast6-avx-x86_64 vs. cast6-generic:

128bit key: (lrw:256bit) (xts:256bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     0.97x   1.00x   1.01x   1.01x   0.99x   0.97x   0.98x   1.01x   0.96x   0.98x
64B     0.98x   0.99x   1.02x   1.01x   0.99x   1.00x   1.01x   0.99x   1.00x   0.99x
256B    1.77x   1.84x   0.99x   1.85x   1.77x   1.77x   1.70x   1.74x   1.69x   1.72x
1024B   1.93x   1.95x   0.99x   1.96x   1.93x   1.93x   1.84x   1.85x   1.89x   1.87x
8192B   1.91x   1.95x   0.99x   1.97x   1.95x   1.91x   1.86x   1.87x   1.93x   1.90x

256bit key: (lrw:384bit) (xts:512bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     0.97x   0.99x   1.02x   1.01x   0.98x   0.99x   1.00x   1.00x   0.98x   0.98x
64B     0.98x   0.99x   1.01x   1.00x   1.00x   1.00x   1.01x   1.01x   0.97x   1.00x
256B    1.77x   1.83x   1.00x   1.86x   1.79x   1.78x   1.70x   1.76x   1.71x   1.69x
1024B   1.92x   1.95x   0.99x   1.96x   1.93x   1.93x   1.83x   1.86x   1.89x   1.87x
8192B   1.94x   1.95x   0.99x   1.97x   1.95x   1.95x   1.87x   1.87x   1.93x   1.91x

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - add larger cast6 testvectors (Johannes Goetzfried, 2012-08-01; 3 files changed, -2/+1520)

New ECB, CBC, CTR, LRW and XTS testvectors for cast6. We need larger testvectors to check parallel code paths in the optimized implementation. Tests have also been added to the tcrypt module.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast6 - prepare generic module for optimized implementations (Johannes Goetzfried, 2012-08-01; 3 files changed, -24/+67)

Rename the cast6 module to cast6_generic to allow autoloading of optimized implementations. Generic functions and s-boxes are exported to be able to use them within optimized implementations.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast5 - add x86_64/avx assembler implementation (Johannes Goetzfried, 2012-08-01; 5 files changed, -0/+928)

This patch adds an x86_64/AVX assembler implementation of the Cast5 block cipher. The implementation processes sixteen blocks in parallel (four 4-block-chunk AVX operations). The table lookups are done in general-purpose registers. For small blocksizes the functions from the generic module are called. A good performance increase is provided for blocksizes greater than or equal to 128B.

Patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results, Intel Core i5-2500 CPU (fam:6, model:42, step:7), cast5-avx-x86_64 vs. cast5-generic:

64bit key:
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
16B     0.99x   0.99x   1.00x   1.00x   1.02x   1.01x
64B     1.00x   1.00x   0.98x   1.00x   1.01x   1.02x
256B    2.03x   2.01x   0.95x   2.11x   2.12x   2.13x
1024B   2.30x   2.24x   0.95x   2.29x   2.35x   2.35x
8192B   2.31x   2.27x   0.95x   2.31x   2.39x   2.39x

128bit key:
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
16B     0.99x   0.99x   1.00x   1.00x   1.01x   1.01x
64B     1.00x   1.00x   0.98x   1.01x   1.02x   1.01x
256B    2.17x   2.13x   0.96x   2.19x   2.19x   2.19x
1024B   2.29x   2.32x   0.95x   2.34x   2.37x   2.38x
8192B   2.35x   2.32x   0.95x   2.35x   2.39x   2.39x

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - add larger cast5 testvectors (Johannes Goetzfried, 2012-08-01; 4 files changed, -2/+871)

New ECB, CBC and CTR testvectors for cast5. We need larger testvectors to check parallel code paths in the optimized implementation. Tests have also been added to the tcrypt module.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cast5 - prepare generic module for optimized implementations (Johannes Goetzfried, 2012-08-01; 3 files changed, -34/+69)

Rename the cast5 module to cast5_generic to allow autoloading of optimized implementations. Generic functions and s-boxes are exported to be able to use them within optimized implementations.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: arch/s390 - cleanup - remove unneeded cra_list initialization (Jussi Kivilinna, 2012-08-01; 3 files changed, -16/+0)

Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'arch/s390/crypto/'.

Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

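The whole cleanup series (this and the three following commits) removes boilerplate of the following shape from algorithm definitions. This is a representative sketch, not a specific driver's code; only the .cra_list line is the point of the change:

    #include <linux/crypto.h>

    static struct crypto_alg example_alg = {
        .cra_name      = "example",
        .cra_blocksize = 16,
        /* This is the line being deleted everywhere: __crypto_register_alg()
         * does list_add(&alg->cra_list, ...) itself, so any value set here
         * is overwritten before it is ever used. */
        .cra_list      = LIST_HEAD_INIT(example_alg.cra_list),
        /* ... remaining fields unchanged ... */
    };
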
* crypto: drivers - remove cra_list initialization (Jussi Kivilinna, 2012-08-01; 12 files changed, -21/+0)

Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'drivers/crypto/'.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linux-geode@lists.infradead.org
Cc: Michal Ludvig <michal@logix.cz>
Cc: Dmitry Kasatkin <dmitry.kasatkin@nokia.com>
Cc: Varun Wadekar <vwadekar@nvidia.com>
Cc: Eric Bénard <eric@eukrea.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Acked-by: Kent Yoder <key@linux.vnet.ibm.com>
Acked-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations (Jussi Kivilinna, 2012-08-01; 11 files changed, -54/+1)

Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'arch/x86/crypto/'.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: cleanup - remove unneeded crypto_alg.cra_list initializations (Jussi Kivilinna, 2012-08-01; 15 files changed, -15/+0)

Initialization of cra_list is currently mixed: most ciphers initialize this field and most shashes do not. Initialization however is not needed at all, since cra_list is initialized/overwritten in __crypto_register_alg() with list_add(). Therefore perform cleanup to remove all unneeded initializations of this field in 'crypto/'.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: whirlpool - use crypto_[un]register_shashes (Jussi Kivilinna, 2012-08-01; 1 file changed, -33/+6)

Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha512 - use crypto_[un]register_shashes (Jussi Kivilinna, 2012-08-01; 1 file changed, -15/+5)

Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: sha256 - use crypto_[un]register_shashes (Jussi Kivilinna, 2012-08-01; 1 file changed, -20/+5)

Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tiger - use crypto_[un]register_shashes (Jussi Kivilinna, 2012-08-01; 1 file changed, -32/+6)

Combine all shash algs to be registered and use the new crypto_[un]register_shashes functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once (Jussi Kivilinna, 2012-08-01; 2 files changed, -0/+38)

Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash crypto modules that register multiple algorithms.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

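Typical use in a module that registers several shash algorithms looks roughly like the sketch below; the array contents are elided and the specific module/function names are illustrative:

    #include <linux/module.h>
    #include <crypto/internal/hash.h>

    static struct shash_alg sha256_algs[2];  /* filled with sha224/sha256 descriptors */

    static int __init sha256_generic_mod_init(void)
    {
        return crypto_register_shashes(sha256_algs, ARRAY_SIZE(sha256_algs));
    }

    static void __exit sha256_generic_mod_fini(void)
    {
        crypto_unregister_shashes(sha256_algs, ARRAY_SIZE(sha256_algs));
    }

    module_init(sha256_generic_mod_init);
    module_exit(sha256_generic_mod_fini);
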
* crypto: ansi_cprng - use crypto_[un]register_algs (Jussi Kivilinna, 2012-08-01; 1 file changed, -40/+23)

Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

Cc: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

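crypto_[un]register_algs() follows the same pattern as the shash helpers above, just for struct crypto_alg arrays; another hedged sketch with the array contents elided and names chosen only for illustration:

    #include <linux/module.h>
    #include <linux/crypto.h>

    static struct crypto_alg rng_algs[2];   /* e.g. ansi_cprng + fips variant */

    static int __init prng_mod_init(void)
    {
        return crypto_register_algs(rng_algs, ARRAY_SIZE(rng_algs));
    }

    static void __exit prng_mod_fini(void)
    {
        crypto_unregister_algs(rng_algs, ARRAY_SIZE(rng_algs));
    }

    module_init(prng_mod_init);
    module_exit(prng_mod_fini);
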
* crypto: serpent - use crypto_[un]register_algs (Jussi Kivilinna, 2012-08-01; 1 file changed, -34/+19)

Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: des - use crypto_[un]register_algs (Jussi Kivilinna, 2012-08-01; 1 file changed, -20/+5)

Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: crypto_null - use crypto_[un]register_algs (Jussi Kivilinna, 2012-08-01; 1 file changed, -39/+18)

Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: tea - use crypto_[un]register_algs (Jussi Kivilinna, 2012-08-01; 1 file changed, -35/+6)

Combine all crypto_alg to be registered and use the new crypto_[un]register_algs functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6 (Linus Torvalds, 2012-07-31; 7 files changed, -169/+248)

Pull irqdomain changes from Grant Likely:
 "Round of refactoring and enhancements to irq_domain infrastructure.

  This series starts the process of simplifying irqdomain. The ultimate goal is to merge LEGACY, LINEAR and TREE mappings into a single system, but had to back off from that after some last minute bugs. Instead it mainly reorganizes the code and ensures that the reverse map gets populated when the irq is mapped instead of the first time it is looked up. Merging of the irq_domain types is deferred to v3.7.

  In other news, this series adds helpers for creating static mappings on a linear or tree mapping."

* tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6:
  irqdomain: Improve diagnostics when a domain mapping fails
  irqdomain: eliminate slow-path revmap lookups
  irqdomain: Fix irq_create_direct_mapping() to test irq_domain type.
  irqdomain: Eliminate dedicated radix lookup functions
  irqdomain: Support for static IRQ mapping and association.
  irqdomain: Always update revmap when setting up a virq
  irqdomain: Split disassociating code into separate function
  irq_domain: correct a minor wrong comment for linear revmap
  irq_domain: Standardise legacy/linear domain selection
  irqdomain: Make ops->map hook optional
  irqdomain: Remove unnecessary test for IRQ_DOMAIN_MAP_LEGACY
  irqdomain: Simple NUMA awareness.
  devicetree: add helper inline for retrieving a node's full name

| * irqdomain: Improve diagnostics when a domain mapping fails (Mark Brown, 2012-07-24; 1 file changed, -6/+11)

When the map operation fails, log the error code we get and add a WARN_ON() so we get a backtrace (which should help work out which interrupt is the source of the issue).

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

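The shape of such a change in the mapping path is roughly the following sketch; the variable names and the exact error handling are assumptions, not code copied from the patch:

    ret = domain->ops->map(domain, virq, hwirq);
    if (ret != 0) {
        /* Report which virq/hwirq pair failed and with what error code,
         * and WARN so the backtrace identifies the caller that tried to
         * set up the mapping. */
        pr_err("irq-%i==>hwirq-0x%lx mapping failed: %d\n",
               virq, (unsigned long)hwirq, ret);
        WARN_ON(1);
        return ret;
    }
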
| * irqdomain: eliminate slow-path revmap lookups (Grant Likely, 2012-07-24; 1 file changed, -40/+25)

With the current state of irq_domain, the reverse map is always updated when new IRQs get mapped. This means that the irq_find_mapping() function can be simplified to execute the revmap lookup functions unconditionally. This patch adds lookup functions for the revmaps that don't yet have one and removes the slow path lookup code path.

v8: Broke out unrelated changes into separate patches. Rebased on Paul's irq association patches.
v7: Rebased to irqdomain/next for v3.4 and applied before the removal of 'hint'
v6: Remove the slow path entirely. The only place where the slow path could get called is for a linear mapping if the hwirq number is larger than the linear revmap size. There shouldn't be any interrupt controllers that do that.
v5: rewrite to not use a ->revmap() callback. It is simpler, smaller, safer and faster to open code each of the revmap lookups directly into irq_find_mapping() via a switch statement.
v4: Fix build failure on incorrect variable reference.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Milton Miller <miltonm@bga.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Rob Herring <rob.herring@calxeda.com>

| * Merge remote-tracking branch 'origin' into irqdomain/next (Grant Likely, 2012-07-24; 4548 files changed, -104854/+181547)

| * | irqdomain: Fix irq_create_direct_mapping() to test irq_domain type. (Grant Likely, 2012-07-11; 1 file changed, -2/+2)

irq_create_direct_mapping can only be used with the NOMAP type. Make the function test to ensure it is passed the correct type of irq_domain.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

| * | irqdomain: Eliminate dedicated radix lookup functions (Grant Likely, 2012-07-11; 4 files changed, -65/+3)

In preparation to remove the slow revmap path, eliminate the public radix revmap lookup functions. This simplifies the code and makes the slowpath removal patch a lot simpler.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

| * | irqdomain: Support for static IRQ mapping and association. (Grant Likely, 2012-07-11; 2 files changed, -25/+107)

This adds a new strict mapping API for supporting creation of linux IRQs at existing positions within the domain. The new routines are as follows.

For dynamic allocation and insertion to specified ranges:
  - irq_create_identity_mapping()
  - irq_create_strict_mappings()

These will allocate and associate a range of linux IRQs at the specified location. This can be used by controllers that have their own static linux IRQ definitions to map a hwirq range to, as well as for platforms that wish to establish a 1:1 identity mapping between linux and hwirq space.

For insertion to specified ranges by platforms that do their own irq_desc management:
  - irq_domain_associate()
  - irq_domain_associate_many()

These in turn call back in to the domain's ->map() routine, for further processing by the platform. Disassociation of IRQs gets handled through irq_dispose_mapping() as normal.

With these in place it should be possible to begin migration of legacy IRQ domains to linear ones, without requiring special handling for static vs dynamic IRQ definitions in DT vs non-DT paths. This also makes it possible for domains with static mappings to adopt whichever tree model best fits their needs, rather than simply restricting them to linear revmaps.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
[grant.likely: Reorganized irq_domain_associate{,_many} to have all logic in one place]
[grant.likely: Add error checking for unallocated irq_descs at associate time]
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

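As a usage sketch (the controller name, IRQ base and error handling are invented for illustration), a driver with fixed Linux IRQ numbers could associate a hwirq range like this:

    /* Map hwirqs 0..15 of this controller onto the pre-allocated Linux
     * IRQs MYCHIP_IRQ_BASE..MYCHIP_IRQ_BASE+15 (both names hypothetical). */
    ret = irq_create_strict_mappings(domain, MYCHIP_IRQ_BASE, 0, 16);
    if (ret)
        pr_err("failed to map hwirq range: %d\n", ret);

    /* Platforms that want hwirq == virq can use the identity helper: */
    ret = irq_create_identity_mapping(domain, 42);  /* hwirq 42 -> irq 42 */
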
| * | irqdomain: Always update revmap when setting up a virq (Grant Likely, 2012-07-11; 2 files changed, -3/+12)

At irq_setup_virq() time all of the data needed to update the reverse map is available, but the current code ignores it and relies upon the slow path to insert revmap records. This patch adds revmap updating to the setup path so the slow path will no longer be necessary.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

| * | irqdomain: Split disassociating code into separate function (Grant Likely, 2012-07-11; 1 file changed, -28/+47)

This patch moves the irq disassociation code out into a separate function in preparation to extend irq_setup_virq to handle multiple irqs and rename it for use by interrupt controller drivers. The new function will be used by irq_setup_virq() in its error path.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

| * | Merge tag 'v3.5-rc6' into irqdomain/next (Grant Likely, 2012-07-11; 995 files changed, -5037/+9700)

Linux 3.5-rc6

| * | | irq_domain: correct a minor wrong comment for linear revmap (Dong Aisheng, 2012-07-11; 1 file changed, -1/+1)

The revmap type should be linear for the irq_domain_add_linear function.

Signed-off-by: Dong Aisheng <dong.aisheng@linaro.org>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| * | | irq_domain: Standardise legacy/linear domain selection (Mark Brown, 2012-07-11; 3 files changed, -0/+40)

A large proportion of interrupt controllers that support legacy mappings do so because non-DT systems need to use fixed IRQ numbers when registering devices via buses but can otherwise use a linear mapping. The interrupt controller itself typically is not affected by the mapping used, and best practice is to use a linear mapping where possible, so drivers frequently select at runtime depending on whether a legacy range has been allocated to them.

Standardise this behaviour by providing irq_domain_register_simple(), which will allocate a linear mapping unless a positive first_irq is provided, in which case it will fall back to a legacy mapping. This helps make best practice for irq_domain adoption clearer.

Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

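The behaviour being standardised is essentially the selection below, sketched in terms of the underlying add_legacy/add_linear calls; the parameter names mirror the description above and this is not copied from the patch itself:

    struct irq_domain *domain;

    if (first_irq > 0)
        /* Non-DT board code handed us fixed IRQ numbers: keep them. */
        domain = irq_domain_add_legacy(of_node, size, first_irq, 0,
                                       ops, host_data);
    else
        /* Otherwise prefer a linear mapping and let virqs be allocated. */
        domain = irq_domain_add_linear(of_node, size, ops, host_data);
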
| * | | irqdomain: Make ops->map hook optional (Grant Likely, 2012-06-17; 1 file changed, -10/+4)

There isn't a really compelling reason to force ->map to be populated, so allow it to be left unset.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

| * | | irqdomain: Remove unnecessary test for IRQ_DOMAIN_MAP_LEGACY (Grant Likely, 2012-06-15; 1 file changed, -2/+1)

Where irq_domain_associate() is called in irq_create_mapping, there is no need to test for IRQ_DOMAIN_MAP_LEGACY because it is already tested for earlier in the routine.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>

| * | | irqdomain: Simple NUMA awareness. (Paul Mundt, 2012-06-15; 2 files changed, -10/+18)

While common irqdesc allocation is node aware, the irqdomain code is not. Presently we observe a number of regressions/inconsistencies on NUMA-capable platforms:

  - Platforms using irqdomains with legacy mappings, where the irq_descs are allocated node-local and the irqdomain data structure is not.

  - Drivers implementing irqdomains will lose node locality regardless of the underlying struct device's node id.

This plugs in NUMA node id proliferation across the various allocation callsites by way of an of_node_to_nid() node lookup. While of_node_to_nid() does the right thing for OF-capable platforms, it doesn't presently handle the non-DT case. This is trivially dealt with by simply wrapping in to numa_node_id() unconditionally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>

| * | | devicetree: add helper inline for retrieving a node's full name (Grant Likely, 2012-06-15; 10 files changed, -21/+25)

The pattern (np ? np->full_name : "<none>") is rather common in the kernel, but can also make for quite long lines. This patch adds a new inline function, of_node_full_name(), so that the test for a valid node pointer doesn't need to be open coded at all call sites.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>

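The helper is essentially a one-line wrapper around the test described above; the sketch below mirrors that description (placed in <linux/of.h> by the patch) and shows how a call site shrinks:

    #include <linux/of.h>

    static inline const char *of_node_full_name(const struct device_node *np)
    {
        return np ? np->full_name : "<none>";
    }

    /* Call sites then go from
     *     pr_debug("%s\n", np ? np->full_name : "<none>");
     * to
     *     pr_debug("%s\n", of_node_full_name(np));
     */
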
* | | | Merge branch 'akpm' (Andrew's patch-bomb) (Linus Torvalds, 2012-07-31; 131 files changed, -1060/+3174)

Merge Andrew's second set of patches:
 - MM
 - a few random fixes
 - a couple of RTC leftovers

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (120 commits)
  rtc/rtc-88pm80x: remove unneed devm_kfree
  rtc/rtc-88pm80x: assign ret only when rtc_register_driver fails
  mm: hugetlbfs: close race during teardown of hugetlbfs shared page tables
  tmpfs: distribute interleave better across nodes
  mm: remove redundant initialization
  mm: warn if pg_data_t isn't initialized with zero
  mips: zero out pg_data_t when it's allocated
  memcg: gix memory accounting scalability in shrink_page_list
  mm/sparse: remove index_init_lock
  mm/sparse: more checks on mem_section number
  mm/sparse: optimize sparse_index_alloc
  memcg: add mem_cgroup_from_css() helper
  memcg: further prevent OOM with too many dirty pages
  memcg: prevent OOM with too many dirty pages
  mm: mmu_notifier: fix freed page still mapped in secondary MMU
  mm: memcg: only check anon swapin page charges for swap cache
  mm: memcg: only check swap cache pages for repeated charging
  mm: memcg: split swapin charge function into private and public part
  mm: memcg: remove needless !mm fixup to init_mm when charging
  mm: memcg: remove unneeded shmem charge type
  ...

| * | | | rtc/rtc-88pm80x: remove unneed devm_kfree (Devendra Naga, 2012-07-31; 1 file changed, -2/+0)

devm_kzalloc() doesn't need a matching devm_kfree(); the freeing mechanism will trigger when the driver unloads.

Signed-off-by: Devendra Naga <devendra.aaru@gmail.com>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Cc: Ashish Jangam <ashish.jangam@kpitcummins.com>
Cc: David Dajun Chen <dchen@diasemi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

| * | | | rtc/rtc-88pm80x: assign ret only when rtc_register_driver fails (Devendra Naga, 2012-07-31; 1 file changed, -1/+1)

In probe, ret was being assigned the PTR_ERR() value right after the rtc_register_driver() call; it should only be assigned inside the if (IS_ERR(ptr)) check, i.e. only when the registration actually fails and we take that branch.

Signed-off-by: Devendra Naga <devendra.aaru@gmail.com>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Cc: Ashish Jangam <ashish.jangam@kpitcummins.com>
Cc: David Dajun Chen <dchen@diasemi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

| * | | | mm: hugetlbfs: close race during teardown of hugetlbfs shared page tables (Mel Gorman, 2012-07-31; 3 files changed, -3/+38)

If a process creates a large hugetlbfs mapping that is eligible for page table sharing and forks heavily with children, some of whom fault and others of which destroy the mapping, then it is possible for page tables to get corrupted. Some teardowns of the mapping encounter a "bad pmd" and output a message to the kernel log. The final teardown will trigger a BUG_ON in mm/filemap.c.

This was reproduced in 3.4 but is known to have existed for a long time and goes back at least as far as 2.6.37. It was probably introduced in 2.6.20 by [39dde65c: shared page table for hugetlb page]. The messages look like this:

[ ..........] Lots of bad pmd messages followed by this
[ 127.164256] mm/memory.c:391: bad pmd ffff880412e04fe8(80000003de4000e7).
[ 127.164257] mm/memory.c:391: bad pmd ffff880412e04ff0(80000003de6000e7).
[ 127.164258] mm/memory.c:391: bad pmd ffff880412e04ff8(80000003de0000e7).
[ 127.186778] ------------[ cut here ]------------
[ 127.186781] kernel BUG at mm/filemap.c:134!
[ 127.186782] invalid opcode: 0000 [#1] SMP
[ 127.186783] CPU 7
[ 127.186784] Modules linked in: af_packet cpufreq_conservative cpufreq_userspace cpufreq_powersave acpi_cpufreq mperf ext3 jbd dm_mod coretemp crc32c_intel usb_storage ghash_clmulni_intel aesni_intel i2c_i801 r8169 mii uas sr_mod cdrom sg iTCO_wdt iTCO_vendor_support shpchp serio_raw cryptd aes_x86_64 e1000e pci_hotplug dcdbas aes_generic container microcode ext4 mbcache jbd2 crc16 sd_mod crc_t10dif i915 drm_kms_helper drm i2c_algo_bit ehci_hcd ahci libahci usbcore rtc_cmos usb_common button i2c_core intel_agp video intel_gtt fan processor thermal thermal_sys hwmon ata_generic pata_atiixp libata scsi_mod
[ 127.186801]
[ 127.186802] Pid: 9017, comm: hugetlbfs-test Not tainted 3.4.0-autobuild #53 Dell Inc. OptiPlex 990/06D7TR
[ 127.186804] RIP: 0010:[<ffffffff810ed6ce>] [<ffffffff810ed6ce>] __delete_from_page_cache+0x15e/0x160
[ 127.186809] RSP: 0000:ffff8804144b5c08 EFLAGS: 00010002
[ 127.186810] RAX: 0000000000000001 RBX: ffffea000a5c9000 RCX: 00000000ffffffc0
[ 127.186811] RDX: 0000000000000000 RSI: 0000000000000009 RDI: ffff88042dfdad00
[ 127.186812] RBP: ffff8804144b5c18 R08: 0000000000000009 R09: 0000000000000003
[ 127.186813] R10: 0000000000000000 R11: 000000000000002d R12: ffff880412ff83d8
[ 127.186814] R13: ffff880412ff83d8 R14: 0000000000000000 R15: ffff880412ff83d8
[ 127.186815] FS: 00007fe18ed2c700(0000) GS:ffff88042dce0000(0000) knlGS:0000000000000000
[ 127.186816] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 127.186817] CR2: 00007fe340000503 CR3: 0000000417a14000 CR4: 00000000000407e0
[ 127.186818] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 127.186819] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 127.186820] Process hugetlbfs-test (pid: 9017, threadinfo ffff8804144b4000, task ffff880417f803c0)
[ 127.186821] Stack:
[ 127.186822] ffffea000a5c9000 0000000000000000 ffff8804144b5c48 ffffffff810ed83b
[ 127.186824] ffff8804144b5c48 000000000000138a 0000000000001387 ffff8804144b5c98
[ 127.186825] ffff8804144b5d48 ffffffff811bc925 ffff8804144b5cb8 0000000000000000
[ 127.186827] Call Trace:
[ 127.186829] [<ffffffff810ed83b>] delete_from_page_cache+0x3b/0x80
[ 127.186832] [<ffffffff811bc925>] truncate_hugepages+0x115/0x220
[ 127.186834] [<ffffffff811bca43>] hugetlbfs_evict_inode+0x13/0x30
[ 127.186837] [<ffffffff811655c7>] evict+0xa7/0x1b0
[ 127.186839] [<ffffffff811657a3>] iput_final+0xd3/0x1f0
[ 127.186840] [<ffffffff811658f9>] iput+0x39/0x50
[ 127.186842] [<ffffffff81162708>] d_kill+0xf8/0x130
[ 127.186843] [<ffffffff81162812>] dput+0xd2/0x1a0
[ 127.186845] [<ffffffff8114e2d0>] __fput+0x170/0x230
[ 127.186848] [<ffffffff81236e0e>] ? rb_erase+0xce/0x150
[ 127.186849] [<ffffffff8114e3ad>] fput+0x1d/0x30
[ 127.186851] [<ffffffff81117db7>] remove_vma+0x37/0x80
[ 127.186853] [<ffffffff81119182>] do_munmap+0x2d2/0x360
[ 127.186855] [<ffffffff811cc639>] sys_shmdt+0xc9/0x170
[ 127.186857] [<ffffffff81410a39>] system_call_fastpath+0x16/0x1b
[ 127.186858] Code: 0f 1f 44 00 00 48 8b 43 08 48 8b 00 48 8b 40 28 8b b0 40 03 00 00 85 f6 0f 88 df fe ff ff 48 89 df e8 e7 cb 05 00 e9 d2 fe ff ff <0f> 0b 55 83 e2 fd 48 89 e5 48 83 ec 30 48 89 5d d8 4c 89 65 e0
[ 127.186868] RIP [<ffffffff810ed6ce>] __delete_from_page_cache+0x15e/0x160
[ 127.186870] RSP <ffff8804144b5c08>
[ 127.186871] ---[ end trace 7cbac5d1db69f426 ]---

The bug is a race and not always easy to reproduce. To reproduce it I was doing the following on a single socket I7-based machine with 16G of RAM:

$ hugeadm --pool-pages-max DEFAULT:13G
$ echo $((18*1048576*1024)) > /proc/sys/kernel/shmmax
$ echo $((18*1048576*1024)) > /proc/sys/kernel/shmall
$ for i in `seq 1 9000`; do ./hugetlbfs-test; done

On my particular machine, it usually triggers within 10 minutes but enabling debug options can change the timing such that it never hits. Once the bug is triggered, the machine is in trouble and needs to be rebooted. The machine will respond but processes accessing proc like "ps aux" will hang due to the BUG_ON. shutdown will also hang and needs a hard reset or a sysrq-b.

The basic problem is a race between page table sharing and teardown. For the most part page table sharing depends on i_mmap_mutex. In some cases, it is also taking the mm->page_table_lock for the PTE updates, but with shared page tables it is the i_mmap_mutex that is more important. Unfortunately it appears to be also insufficient. Consider the following situation:

Process A                               Process B
---------                               ---------
hugetlb_fault                           shmdt
                                        LockWrite(mmap_sem)
                                          do_munmap
                                            unmap_region
                                              unmap_vmas
                                                unmap_single_vma
                                                  unmap_hugepage_range
                                                    Lock(i_mmap_mutex)
                                                    Lock(mm->page_table_lock)
                                                    huge_pmd_unshare/unmap tables <--- (1)
                                                    Unlock(mm->page_table_lock)
                                                    Unlock(i_mmap_mutex)
  huge_pte_alloc                            ...
    Lock(i_mmap_mutex)                      ...
    vma_prio_walk, find svma, spte          ...
    Lock(mm->page_table_lock)               ...
    share spte                              ...
    Unlock(mm->page_table_lock)             ...
    Unlock(i_mmap_mutex)                    ...
  hugetlb_no_page                       <--- (2)
                                        free_pgtables
                                          unlink_file_vma
                                          hugetlb_free_pgd_range
                                          remove_vma_list

In this scenario, it is possible for Process A to share page tables with Process B that is trying to tear them down. The i_mmap_mutex on its own does not prevent Process A walking Process B's page tables. At (1) above, the page tables are not shared yet so it unmaps the PMDs. Process A sets up page table sharing and at (2) faults a new entry. Process B then trips up on it in free_pgtables.

This patch fixes the problem by adding a new function __unmap_hugepage_range_final that is only called when the VMA is about to be destroyed. This function clears VM_MAYSHARE during unmap_hugepage_range() under the i_mmap_mutex. This makes the VMA ineligible for sharing and avoids the race. Superficially this looks like it would then be vulnerable to truncate and madvise issues, but hugetlbfs has its own truncate handlers so does not use unmap_mapping_range() and does not support madvise(DONTNEED). This should be treated as a -stable candidate if it is merged.

Test program is as follows. The test case was mostly written by Michal Hocko with a few minor changes to reproduce this bug.

==== CUT HERE ====

static size_t huge_page_size = (2UL << 20);
static size_t nr_huge_page_A = 512;
static size_t nr_huge_page_B = 5632;

unsigned int get_random(unsigned int max)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	srandom(tv.tv_usec);
	return random() % max;
}

static void play(void *addr, size_t size)
{
	unsigned char *start = addr, *end = start + size, *a;

	start += get_random(size/2);

	/* we could itterate on huge pages but let's give it more time. */
	for (a = start; a < end; a += 4096)
		*a = 0;
}

int main(int argc, char **argv)
{
	key_t key = IPC_PRIVATE;
	size_t sizeA = nr_huge_page_A * huge_page_size;
	size_t sizeB = nr_huge_page_B * huge_page_size;
	int shmidA, shmidB;
	void *addrA = NULL, *addrB = NULL;
	int nr_children = 300, n = 0;

	if ((shmidA = shmget(key, sizeA, IPC_CREAT|SHM_HUGETLB|0660)) == -1) {
		perror("shmget:");
		return 1;
	}
	if ((addrA = shmat(shmidA, addrA, SHM_R|SHM_W)) == (void *)-1UL) {
		perror("shmat");
		return 1;
	}
	if ((shmidB = shmget(key, sizeB, IPC_CREAT|SHM_HUGETLB|0660)) == -1) {
		perror("shmget:");
		return 1;
	}
	if ((addrB = shmat(shmidB, addrB, SHM_R|SHM_W)) == (void *)-1UL) {
		perror("shmat");
		return 1;
	}

fork_child:
	switch(fork()) {
	case 0:
		switch (n%3) {
		case 0:
			play(addrA, sizeA);
			break;
		case 1:
			play(addrB, sizeB);
			break;
		case 2:
			break;
		}
		break;
	case -1:
		perror("fork:");
		break;
	default:
		if (++n < nr_children)
			goto fork_child;
		play(addrA, sizeA);
		break;
	}
	shmdt(addrA);
	shmdt(addrB);
	do {
		wait(NULL);
	} while (--n > 0);
	shmctl(shmidA, IPC_RMID, NULL);
	shmctl(shmidB, IPC_RMID, NULL);
	return 0;
}

[akpm@linux-foundation.org: name the declaration's args, fix CONFIG_HUGETLBFS=n build]
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

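The fix described above is, in sketch form, roughly the following; the argument list is simplified and the body is a paraphrase of the description, not code copied from the patch:

    /* Called only from the final teardown path (when the VMA is about to be
     * destroyed), with i_mmap_mutex held by the caller. */
    void __unmap_hugepage_range_final(struct vm_area_struct *vma,
                                      unsigned long start, unsigned long end,
                                      struct page *ref_page)
    {
        /* Same work as a normal hugepage unmap... */
        __unmap_hugepage_range(vma, start, end, ref_page);

        /* ...but, still under i_mmap_mutex, clear VM_MAYSHARE so a racing
         * hugetlb_fault() in another process can no longer share the page
         * tables this VMA is about to free. */
        vma->vm_flags &= ~VM_MAYSHARE;
    }
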
| * | | | tmpfs: distribute interleave better across nodes (Nathan Zimmer, 2012-07-31; 1 file changed, -2/+4)

When tmpfs has the interleave memory policy, it always starts allocating for each file from node 0 at offset 0. When there are many small files, the lower nodes fill up disproportionately. This patch spreads out node usage by starting files at nodes other than 0, by using the inode number to bias the starting node for interleave.

Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

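The idea, in heavily simplified sketch form (the function name and mm-internal details here are assumptions, not the patch itself), is to fold the inode number into the offset that feeds the interleave node calculation, so different files start their interleave sequence on different nodes:

    /* Hypothetical sketch: bias the interleave starting point per file. */
    static unsigned long shmem_interleave_offset(struct inode *inode,
                                                 unsigned long page_index)
    {
        /* Files no longer all start at node 0: the inode number shifts
         * each file's starting position in the interleave sequence. */
        return page_index + inode->i_ino;
    }
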
| * | | | mm: remove redundant initialization (Minchan Kim, 2012-07-31; 2 files changed, -12/+2)

pg_data_t is zeroed before reaching free_area_init_core(), so remove the now unnecessary initializations.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

| * | | | mm: warn if pg_data_t isn't initialized with zero (Minchan Kim, 2012-07-31; 1 file changed, -0/+3)

Warn if memory-hotplug/boot code doesn't initialize pg_data_t with zero when it is allocated. Arch code and memory hotplug already initialize pg_data_t, so this warning should never happen. I select fields randomly near the beginning, middle and end of pg_data_t for checking.

This patch isn't for performance but for removing the initialization code that would otherwise have to be added whenever we add a new field to pg_data_t or zone.

Firstly, Andrew suggested clearing out pg_data_t in the MM core part, but Tejun doesn't like it because, in the future, some archs may initialize some fields in arch code and pass them into the general MM part, so blindly clearing it out in the mm core part would be very annoying.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
