author		Eric Dumazet <eric.dumazet@gmail.com>	2009-07-03 00:08:26 +0200
committer	Ingo Molnar <mingo@elte.hu>	2009-07-03 13:26:38 +0200
commit		bbf2a330d92c5afccfd17592ba9ccd50f41cf748 (patch)
tree		b7c794efa0de27875268f94c96bf69cf5053deb5 /arch/x86
parent		029e5b1636d0511ef143af3a20c83c48e44c03f3 (diff)
x86: atomic64: The atomic64_t data type should be 8 bytes aligned on 32-bit too
Locked instructions that span two cache lines at once are painful. If an atomic64_t straddles two cache lines, my test program runs 10x slower. The chance of that happening is significant: 4/32, or 12.5%.

Make sure an atomic64_t is 8-byte aligned.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
LKML-Reference: <alpine.LFD.2.01.0907021653030.3210@localhost.localdomain>
[ changed it to __aligned(8) as per Andrew's suggestion ]
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/include/asm/atomic_32.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/atomic_32.h b/arch/x86/include/asm/atomic_32.h
index 2503d4e64c2a..ae0fbb5b0578 100644
--- a/arch/x86/include/asm/atomic_32.h
+++ b/arch/x86/include/asm/atomic_32.h
@@ -250,7 +250,7 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
/* An 64bit atomic type */
typedef struct {
- unsigned long long counter;
+ unsigned long long __aligned(8) counter;
} atomic64_t;
#define ATOMIC64_INIT(val) { (val) }