path: root/kernel/sched.c
author    Paul Turner <pjt@google.com>  2011-07-21 09:43:32 -0700
committer Ingo Molnar <mingo@elte.hu>  2011-08-14 12:03:31 +0200
commit    a9cf55b2861057a213e610da2fec52125439a11d (patch)
tree      6c0caf35a6e8fbba7325227f11029f5f4d4cbf7e /kernel/sched.c
parent    58088ad0152ba4b7997388c93d0ca208ec1ece75 (diff)
sched: Expire invalid runtime
Since quota is managed using a global state but consumed on a per-cpu basis, we need to ensure that our per-cpu state is appropriately synchronized. Most importantly, runtime that is stale (from a previous period) should not be locally consumable.

We take advantage of existing sched_clock synchronization about the jiffy to efficiently detect whether we have (globally) crossed a quota boundary.

One catch is that the direction of spread on sched_clock is undefined; specifically, we don't know whether our local clock is behind or ahead of the one responsible for the current expiration time.

Fortunately we can differentiate these cases by considering whether the global deadline has advanced. If it has not, then we assume our clock to be "fast" and advance our local expiration; otherwise, we know the deadline has truly passed and we expire our local runtime.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20110721184757.379275352@google.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
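The decision the last paragraph describes reduces to two wraparound-safe signed clock comparisons. The full patch implements this in kernel/sched_fair.c, which is outside this kernel/sched.c excerpt; the sketch below is reconstructed from the message above, and the helper names (tg_cfs_bandwidth(), rq_of()) and exact field accesses are assumptions, not verbatim patch code.

static void expire_cfs_rq_runtime(struct cfs_rq *cfs_rq)
{
	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
	struct rq *rq = rq_of(cfs_rq);

	/* local deadline still ahead of our clock: runtime remains valid */
	if ((s64)(rq->clock - cfs_rq->runtime_expires) < 0)
		return;

	if ((s64)(cfs_rq->runtime_expires - cfs_b->runtime_expires) >= 0) {
		/*
		 * The global deadline has not advanced: our sched_clock is
		 * merely "fast" relative to the cpu that stamped the
		 * expiration, so push the local deadline out (clock spread
		 * is bounded by the jiffy-based synchronization).
		 */
		cfs_rq->runtime_expires += TICK_NSEC;
	} else {
		/* the period truly ended: stale runtime must not be used */
		cfs_rq->runtime_remaining = 0;
	}
}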
Diffstat (limited to 'kernel/sched.c')
-rw-r--r--  kernel/sched.c | 4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 34bf8e6db9af..a2d55144bd9c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -256,6 +256,7 @@ struct cfs_bandwidth {
ktime_t period;
u64 quota, runtime;
s64 hierarchal_quota;
+ u64 runtime_expires;

int idle, timer_active;
struct hrtimer period_timer;
@@ -396,6 +397,7 @@ struct cfs_rq {
#endif
#ifdef CONFIG_CFS_BANDWIDTH
int runtime_enabled;
+ u64 runtime_expires;
s64 runtime_remaining;
#endif
#endif
@@ -9166,8 +9168,8 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
raw_spin_lock_irq(&cfs_b->lock);
cfs_b->period = ns_to_ktime(period);
cfs_b->quota = quota;
- cfs_b->runtime = quota;
+ __refill_cfs_bandwidth_runtime(cfs_b);

/* restart the period timer (if active) to handle new period expiry */
if (runtime_enabled && cfs_b->timer_active) {
/* force a reprogram */
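The hunk above swaps the direct "cfs_b->runtime = quota" assignment for __refill_cfs_bandwidth_runtime(), whose body is introduced elsewhere in the patch and is not shown in this excerpt. A minimal sketch of what such a refill helper plausibly does, per the commit message (top up the runtime and stamp when it goes stale); the sched_clock_cpu() usage here is an assumption:

static void __refill_cfs_bandwidth_runtime(struct cfs_bandwidth *cfs_b)
{
	u64 now;

	/* an unconstrained (infinite-quota) group never expires runtime */
	if (cfs_b->quota == RUNTIME_INF)
		return;

	/* refill to a full quota and record when this runtime goes stale */
	now = sched_clock_cpu(smp_processor_id());
	cfs_b->runtime = cfs_b->quota;
	cfs_b->runtime_expires = now + ktime_to_ns(cfs_b->period);
}

Calling this from tg_set_cfs_bandwidth() (under cfs_b->lock, as the hunk shows) means a quota update behaves like a fresh period: the new runtime carries a valid expiration instead of inheriting whatever deadline the old runtime had.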