| author | Shaibal Dutta <shaibal.dutta@broadcom.com> | 2014-01-31 11:53:06 -0800 |
|---|---|---|
| committer | Paul E. McKenney <paulmck@linux.vnet.ibm.com> | 2014-02-17 15:02:14 -0800 |
| commit | ae1670339c95c3ff96ab10582506cf827c5fecc8 (patch) | |
| tree | 65f8d452bd742cf8942abdfb1a427ab20310dd81 /kernel/rcu/srcu.c | |
| parent | 52e2bb958ac4f9b3c4bdd78606d279852fd72922 (diff) | |
rcu: Move SRCU grace period work to power efficient workqueue
For better use of CPU idle time, allow the scheduler to select the CPU
on which the SRCU grace period work would be scheduled. This improves
idle residency time and conserves power.
This functionality is enabled when CONFIG_WQ_POWER_EFFICIENT is selected.
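For illustration, the pattern the patch adopts is the stock workqueue API call queue_delayed_work() aimed at the shared system_power_efficient_wq, instead of the implicit system_wq that schedule_delayed_work() always uses. The hypothetical module below (not part of the patch; names such as demo_work are invented) is a minimal sketch of that usage:

```c
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/smp.h>

static void demo_work_fn(struct work_struct *work);
static DECLARE_DELAYED_WORK(demo_work, demo_work_fn);

static void demo_work_fn(struct work_struct *work)
{
        /* Report which CPU the workqueue worker ended up on. */
        pr_info("power-efficient work ran on CPU %d\n", raw_smp_processor_id());

        /* Re-arm on the power-efficient queue, mirroring the SRCU change. */
        queue_delayed_work(system_power_efficient_wq, &demo_work, HZ);
}

static int __init demo_init(void)
{
        /*
         * Where older code would call schedule_delayed_work() (always
         * system_wq), queue onto system_power_efficient_wq instead.
         */
        queue_delayed_work(system_power_efficient_wq, &demo_work, 0);
        return 0;
}

static void __exit demo_exit(void)
{
        cancel_delayed_work_sync(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```

The caller-side change is just the choice of queue; the delayed-work object and the callback are used exactly as before.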
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Dipankar Sarma <dipankar@in.ibm.com>
Signed-off-by: Shaibal Dutta <shaibal.dutta@broadcom.com>
[zoran.markovic@linaro.org: Rebased to latest kernel version. Added commit
message. Fixed code alignment.]
Signed-off-by: Zoran Markovic <zoran.markovic@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Diffstat (limited to 'kernel/rcu/srcu.c')
| -rw-r--r-- | kernel/rcu/srcu.c | 5 |
|---|---|---|

1 file changed, 3 insertions(+), 2 deletions(-)
```diff
diff --git a/kernel/rcu/srcu.c b/kernel/rcu/srcu.c
index 5db7e9272d37..2359779e1daa 100644
--- a/kernel/rcu/srcu.c
+++ b/kernel/rcu/srcu.c
@@ -398,7 +398,7 @@ void call_srcu(struct srcu_struct *sp, struct rcu_head *head,
 	rcu_batch_queue(&sp->batch_queue, head);
 	if (!sp->running) {
 		sp->running = true;
-		schedule_delayed_work(&sp->work, 0);
+		queue_delayed_work(system_power_efficient_wq, &sp->work, 0);
 	}
 	spin_unlock_irqrestore(&sp->queue_lock, flags);
 }
@@ -674,7 +674,8 @@ static void srcu_reschedule(struct srcu_struct *sp)
 	}
 
 	if (pending)
-		schedule_delayed_work(&sp->work, SRCU_INTERVAL);
+		queue_delayed_work(system_power_efficient_wq,
+				   &sp->work, SRCU_INTERVAL);
 }
 
 /*
```
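What makes that queue power-efficient: as I read the workqueue code of this era, system_power_efficient_wq is allocated with the WQ_POWER_EFFICIENT flag, which is promoted to WQ_UNBOUND when power-efficient workqueues are enabled and ignored otherwise, so the scheduler rather than the queuing CPU decides where the deferred SRCU work runs. The hypothetical snippet below (my_wq, my_setup, and the queue name are invented) sketches how a driver could request the same behavior for its own queue; the shared system_power_efficient_wq is created the same way inside kernel/workqueue.c.

```c
#include <linux/workqueue.h>
#include <linux/errno.h>

static struct workqueue_struct *my_wq;

static int my_setup(void)
{
        /*
         * WQ_POWER_EFFICIENT is advisory: with power-efficient workqueues
         * disabled it has no effect and the queue stays per-CPU; with them
         * enabled the queue is treated as unbound, so the scheduler picks
         * the CPU, which is how the SRCU change above conserves power.
         */
        my_wq = alloc_workqueue("my_power_efficient_wq", WQ_POWER_EFFICIENT, 0);
        return my_wq ? 0 : -ENOMEM;
}
```

On kernels without power-efficient workqueues enabled, queuing on such a workqueue is effectively behavior-neutral relative to schedule_delayed_work().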