author | Ingo Molnar <mingo@elte.hu> | 2008-04-23 09:24:06 +0200
committer | Ingo Molnar <mingo@elte.hu> | 2008-05-05 23:56:18 +0200
commit | dfbf4a1bc319f0f9a31e39b2da1fa5c55e85af89 (patch)
tree | 0b9dd19406c53a93452dd345bb05f76aa712a757 /arch/x86/kernel
parent | cb4ad1ffc7c0d8ea7dc8cd8ba303d83551716d46 (diff)
sched: fix cpu clock
David Miller pointed out that nothing in cpu_clock() sets
prev_cpu_time. This caused __sync_cpu_clock() to be called
every time, against the intention of this code.
The result was that in practice we hit a global spinlock every
time cpu_clock() was called, which is suboptimal even though
cpu_clock() is used for tracing and debugging.
While at it, also:
- move the irq disabling to the outermost layer;
this should make cpu_clock() warp-free when called with irqs
enabled.
- use long long instead of cycles_t, for platforms where cycles_t
is only 32 bits wide.
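A minimal sketch of what the fixed cpu_clock() path might look like after
these changes. The helper names __cpu_clock() and __sync_cpu_clock(), the
per-CPU variables prev_cpu_time and time_offset, and the time_sync_thresh
threshold are assumptions inferred from this changelog, not copied from
the patch itself:

	/*
	 * Sketch only, not the actual patch; illustrates the points above:
	 * prev_cpu_time is updated after a sync, irqs are disabled at the
	 * outermost layer, and long long is used throughout.
	 */
	unsigned long long cpu_clock(int cpu)
	{
		unsigned long long prev_cpu_time, time, delta_time;
		unsigned long flags;

		/* disable irqs here, at the outermost layer, to avoid warps */
		local_irq_save(flags);

		prev_cpu_time = per_cpu(prev_cpu_time, cpu);
		time = __cpu_clock(cpu) + per_cpu(time_offset, cpu);
		delta_time = time - prev_cpu_time;

		/*
		 * Only take the global lock in __sync_cpu_clock() when the
		 * per-CPU clock drifted beyond the threshold, and remember
		 * the synced value so the next call sees a small delta:
		 */
		if (unlikely(delta_time > time_sync_thresh)) {
			time = __sync_cpu_clock(time, cpu);
			per_cpu(prev_cpu_time, cpu) = time;
		}

		local_irq_restore(flags);

		return time;
	}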
Reported-by: David Miller <davem@davemloft.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch/x86/kernel')
0 files changed, 0 insertions, 0 deletions