author    Ken Chen <kenchen@google.com>    2007-10-18 21:32:56 +0200
committer Ingo Molnar <mingo@elte.hu>      2007-10-18 21:32:56 +0200
commit    480b9434c542ddf2833aaed3dabba71bc0b787b5
tree      78c2638ac583cc57165ee1393ebbbbbe367f46fb /kernel/sched_debug.c
parent    cc4ea79588e688ea9b1161650979a194dd709169
sched: reduce schedstat variable overhead a bit
schedstat is useful in investigating CPU scheduler behavior. Ideally, it would be beneficial to have it on all the time. However, the cost of turning it on in a production system is quite high, largely due to the number of events it collects and also due to its large memory footprint.

Most of the fields probably don't need to be a full 64 bits wide on 64-bit arches. Rolling over 4 billion events will most likely take a long time, and the user space tools can be made to accommodate that. I'm proposing that the kernel cut back the width of most of these variables on 64-bit systems. (Note: the following patch doesn't affect 32-bit systems.)

Signed-off-by: Ken Chen <kenchen@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
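As a rough illustration of the kind of change the wider patch makes (this is a simplified sketch only, not the actual kernel structures or field layout), the per-runqueue schedstat event counters shrink from unsigned long to unsigned int on 64-bit arches:

/*
 * Simplified sketch -- not the real kernel definition.  Each counter
 * drops from 8 bytes to 4 bytes on 64-bit systems; wrapping at
 * ~4 billion events is tolerated and left to user space tools.
 */
struct schedstats_sketch {
	unsigned int yld_count;		/* sched_yield() events        */
	unsigned int sched_count;	/* schedule() invocations      */
	unsigned int bkl_count;		/* BKL held at schedule() time */
};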
Diffstat (limited to 'kernel/sched_debug.c')
-rw-r--r--  kernel/sched_debug.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched_debug.c b/kernel/sched_debug.c
index a5e517ec07c3..e6fb392e5164 100644
--- a/kernel/sched_debug.c
+++ b/kernel/sched_debug.c
@@ -137,7 +137,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 	SEQ_printf(m, " .%-30s: %ld\n", "nr_running", cfs_rq->nr_running);
 	SEQ_printf(m, " .%-30s: %ld\n", "load", cfs_rq->load.weight);
 #ifdef CONFIG_SCHEDSTATS
-	SEQ_printf(m, " .%-30s: %ld\n", "bkl_count",
+	SEQ_printf(m, " .%-30s: %d\n", "bkl_count",
 			rq->bkl_count);
 #endif
 	SEQ_printf(m, " .%-30s: %ld\n", "nr_spread_over",
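For context (not part of the commit itself): the format specifier changes from %ld to %d because the rest of the patch narrows rq->bkl_count from unsigned long to unsigned int, and keeping %ld would mismatch the argument type and trip the compiler's printf-format checking. A minimal userspace sketch of the same rule, with a hypothetical variable name:

#include <stdio.h>

int main(void)
{
	unsigned int bkl_count = 42;	/* counter narrowed to 32 bits */

	/* "%ld" would expect a long argument here; an int-sized specifier matches */
	printf("%u\n", bkl_count);
	return 0;
}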