commit     3dc167ba5729ddd2d8e3fa1841653792c295d3f1
author:    Oleg Nesterov <oleg@redhat.com>        2020-05-19 19:25:06 +0200
committer: Peter Zijlstra <peterz@infradead.org>  2020-06-15 14:10:00 +0200
tree:      d3348dfe2edc313740bfd0b348d91d36726f9cc1 /include/linux/math64.h
parent:    b3a9e3b9622ae10064826dccb4f7a52bd88c7407
sched/cputime: Improve cputime_adjust()
People report that utime and stime from /proc/<pid>/stat become very
wrong when the numbers are big enough, especially if you watch these
counters incrementally.
Specifically, the current implementation of stime*rtime/total results
in a saw-tooth function on top of the desired line, where the teeth
grow in size as the values become larger. IOW, it has a relative
error.
The result is that, when watching incrementally as time progresses
(for large values), we'll see periods of pure stime or utime increase,
irrespective of the actual ratio we're striving for.
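To make the failure mode concrete, here is a userspace sketch (not the
kernel's code) contrasting a precision-dropping scale, in the spirit of
the old scale_stime(), with an exact 128-bit stime*rtime/total. The
function names and sample values are illustrative, and it assumes a
compiler that provides unsigned __int128:

  #include <stdint.h>
  #include <stdio.h>

  /* Shift right until the 64-bit product can no longer overflow. */
  static uint64_t scale_dropping_bits(uint64_t stime, uint64_t rtime,
  				      uint64_t total)
  {
  	while (stime && rtime > UINT64_MAX / stime) {
  		rtime >>= 1;
  		total >>= 1;
  	}
  	return total ? stime * rtime / total : 0;
  }

  /* Reference: compute the full 128-bit product before dividing. */
  static uint64_t scale_exact(uint64_t stime, uint64_t rtime, uint64_t total)
  {
  	return (uint64_t)((unsigned __int128)stime * rtime / total);
  }

  int main(void)
  {
  	uint64_t stime = 3000000001ULL;	/* arbitrary large tick counts */
  	uint64_t utime = 6999999999ULL;
  	uint64_t rtime = 999999999999ULL;

  	printf("dropping bits: %llu\n", (unsigned long long)
  	       scale_dropping_bits(stime, rtime, stime + utime));
  	printf("exact        : %llu\n", (unsigned long long)
  	       scale_exact(stime, rtime, stime + utime));
  	return 0;
  }

The two results differ in the low digits, and because each shift can
discard a set low bit, the error bound scales with the operands -- which
is exactly the growing saw-tooth described above.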
Replace scale_stime() with a math64.h helper, mul_u64_u64_div_u64(),
that is far more accurate. This also allows architectures to override
the implementation -- for instance, they can opt for the old algorithm
if the new one turns out to be too expensive for them.
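A hedged sketch of the helper's contract: the full 128-bit product of a
and mul, divided by div, with no bits dropped beforehand. The
__int128-based body and the u64 typedef are userspace stand-ins for
illustration; the kernel supplies its own u64 type and its own
implementation:

  #include <stdint.h>

  typedef uint64_t u64;	/* the kernel provides its own u64 */

  /* Contract: (a * mul) / div with a 128-bit intermediate product. */
  static u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
  {
  	return (u64)((unsigned __int128)a * mul / div);
  }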
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200519172506.GA317395@hirez.programming.kicks-ass.net
Diffstat (limited to 'include/linux/math64.h')
-rw-r--r--  include/linux/math64.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/include/linux/math64.h b/include/linux/math64.h
index 11a267413e8e..d097119419e6 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -263,6 +263,8 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
 }
 #endif /* mul_u64_u32_div */
 
+u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div);
+
 #define DIV64_U64_ROUND_UP(ll, d)	\
 	({ u64 _tmp = (d); div64_u64((ll) + _tmp - 1, _tmp); })
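For context, a standalone mock of how the scheduler side of this change
uses the declared helper: cputime_adjust() scales stime against
total = stime + utime and derives utime as the remainder of rtime. The
main() wrapper and sample values are illustrative assumptions, not
kernel code:

  #include <stdint.h>
  #include <stdio.h>

  typedef uint64_t u64;

  static u64 mul_u64_u64_div_u64(u64 a, u64 mul, u64 div)
  {
  	return (u64)((unsigned __int128)a * mul / div);	/* sketch from above */
  }

  int main(void)
  {
  	u64 utime = 6999999999ULL;	/* sample tick counts */
  	u64 stime = 3000000001ULL;
  	u64 rtime = 999999999999ULL;

  	/* stime*rtime/total at full precision; utime is the remainder. */
  	stime = mul_u64_u64_div_u64(stime, rtime, stime + utime);
  	utime = rtime - stime;

  	printf("stime=%llu utime=%llu\n",
  	       (unsigned long long)stime, (unsigned long long)utime);
  	return 0;
  }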