author		Wanpeng Li <wanpeng.li@hotmail.com>	2016-08-11 13:36:35 +0800
committer	Ingo Molnar <mingo@kernel.org>	2016-08-11 11:02:14 +0200
commit		f9bcf1e0e0145323ba2cf72ecad5264ff3883eb1 (patch)
tree		a91de94c46cb85a38fb8198ee2c65e37e4bb8347 /kernel/sched
parent		c0c8c9fa210c9a042060435f17e40ba4a76d6d6f (diff)
sched/cputime: Fix steal time accounting
Commit:
57430218317 ("sched/cputime: Count actually elapsed irq & softirq time")
... didn't take steal time into consideration when the noirqtime kernel
parameter is passed.
As Paolo pointed out before:
| Why not? If idle=poll, for example, any time the guest is suspended (and
| thus cannot poll) does count as stolen time.
This patch fixes it by subtracting steal time from idle time accounting when
the noirqtime parameter is set. The average idle time drops from 56.8% to
54.75% for a nohz idle KVM guest (noirqtime, idle=poll, four vCPUs running
on one pCPU).
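
For reference, here is the patched account_idle_ticks() reassembled from the
diff at the bottom of this page (comments are added for this write-up and are
not part of the commit):

void account_idle_ticks(unsigned long ticks)
{
	cputime_t cputime, steal;

	if (sched_clock_irqtime) {
		/* Full irqtime accounting already splits out idle time. */
		irqtime_account_idle_ticks(ticks);
		return;
	}

	/* Account up to one jiffy of steal time; returns how much was stolen. */
	cputime = cputime_one_jiffy;
	steal = steal_account_process_time(cputime);

	/* The whole jiffy was stolen: account no idle time at all. */
	if (steal >= cputime)
		return;

	/* Account only the part of the jiffy that was not stolen. */
	cputime -= steal;
	account_idle_time(cputime);
}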
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Radim <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1470893795-3527-1-git-send-email-wanpeng.li@hotmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched')
 kernel/sched/cputime.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 1934f658c036..8b9bcc5a58fa 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -508,13 +508,20 @@ void account_process_tick(struct task_struct *p, int user_tick)
  */
 void account_idle_ticks(unsigned long ticks)
 {
-
+	cputime_t cputime, steal;
 	if (sched_clock_irqtime) {
 		irqtime_account_idle_ticks(ticks);
 		return;
 	}
 
-	account_idle_time(jiffies_to_cputime(ticks));
+	cputime = cputime_one_jiffy;
+	steal = steal_account_process_time(cputime);
+
+	if (steal >= cputime)
+		return;
+
+	cputime -= steal;
+	account_idle_time(cputime);
 }
 
 /*
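
The heart of the change is the clamp-and-subtract step above. As a standalone
illustration (hypothetical names, plain nanosecond arithmetic instead of
cputime_t; not kernel code), the same idea looks like this:

#include <stdio.h>
#include <stdint.h>

/*
 * Illustrative sketch only: mimic the patch's "subtract steal from idle"
 * step. tick_ns is the idle time a tick would normally account; steal_ns
 * is what the hypervisor reports as stolen over the same interval.
 */
static uint64_t idle_after_steal(uint64_t tick_ns, uint64_t steal_ns)
{
	if (steal_ns >= tick_ns)
		return 0;	/* the whole tick was stolen: no idle time */
	return tick_ns - steal_ns;
}

int main(void)
{
	/* 1 ms tick, 250 us stolen: 750 us counted as idle. */
	printf("%llu ns idle\n",
	       (unsigned long long)idle_after_steal(1000000, 250000));
	/* Tick fully stolen: nothing counted as idle. */
	printf("%llu ns idle\n",
	       (unsigned long long)idle_after_steal(1000000, 2000000));
	return 0;
}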