author    Tejun Heo <tj@kernel.org>    2012-05-14 15:04:50 -0700
committer Tejun Heo <tj@kernel.org>    2012-05-14 15:04:50 -0700
commit    544ecf310f0e7f51fa057ac2a295fc1b3b35a9d3 (patch)
tree      417e606b3a1a7eaa31a3847a5101db37041e0c20 /kernel
parent    0976dfc1d0cd80a4e9dfaf87bd8744612bde475a (diff)
download  linux-544ecf310f0e7f51fa057ac2a295fc1b3b35a9d3.tar.gz
          linux-544ecf310f0e7f51fa057ac2a295fc1b3b35a9d3.tar.bz2
          linux-544ecf310f0e7f51fa057ac2a295fc1b3b35a9d3.zip
workqueue: skip nr_running sanity check in worker_enter_idle() if trustee is active
worker_enter_idle() has a WARN_ON_ONCE() which triggers if nr_running
isn't zero when every worker is idle.  This can trigger spuriously while
a CPU is going down, due to the way the trustee sets %WORKER_ROGUE and
zaps nr_running.

The trustee first sets %WORKER_ROGUE on all workers without updating
nr_running, releases gcwq->lock, schedules, regrabs gcwq->lock and only
then zaps nr_running.  If the last running worker enters idle in
between, it sees the stale nr_running which hasn't been zapped yet and
triggers the WARN_ON_ONCE().

Fix it by performing the sanity check iff the trustee is idle.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
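For illustration, a minimal user-space sketch of the interleaving and of the gated check, under a simplified single-worker model.  The names (worker_count, idle_count, running, trustee_done, trustee_fn) are illustrative stand-ins, not the kernel's gcwq fields or APIs.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int worker_count = 1, idle_count;
static atomic_int running = 1;   /* stays stale until the "trustee" zaps it */
static bool trustee_done;        /* analogous to the TRUSTEE_DONE gate      */

static void worker_enter_idle(void)
{
	pthread_mutex_lock(&lock);
	idle_count++;
	/*
	 * Ungated, this check would fire spuriously: every worker is idle
	 * but 'running' has not been zapped yet.  Gating it on
	 * trustee_done keeps it quiet during the transient window.
	 */
	if (trustee_done && worker_count == idle_count &&
	    atomic_load(&running))
		fprintf(stderr, "sanity check failed\n");
	pthread_mutex_unlock(&lock);
}

static void *trustee_fn(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	/* mark workers rogue here, but do NOT touch 'running' yet */
	pthread_mutex_unlock(&lock);

	/* window: a worker may enter idle and observe stale 'running' */

	pthread_mutex_lock(&lock);
	atomic_store(&running, 0);   /* zap the counter               */
	trustee_done = true;         /* only now is the check valid   */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, trustee_fn, NULL);
	worker_enter_idle();         /* may run inside the window */
	pthread_join(t, NULL);
	return 0;
}

The design point mirrors the patch: the sanity check is only meaningful once the trustee has finished zapping the counter, so it is gated on that state rather than removed outright.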
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/workqueue.c  9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 211eadb23323..c36c86cf7900 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1213,8 +1213,13 @@ static void worker_enter_idle(struct worker *worker)
 	} else
 		wake_up_all(&gcwq->trustee_wait);
 
-	/* sanity check nr_running */
-	WARN_ON_ONCE(gcwq->nr_workers == gcwq->nr_idle &&
+	/*
+	 * Sanity check nr_running.  Because trustee releases gcwq->lock
+	 * between setting %WORKER_ROGUE and zapping nr_running, the
+	 * warning may trigger spuriously.  Check iff trustee is idle.
+	 */
+	WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
+		     gcwq->nr_workers == gcwq->nr_idle &&
 		     atomic_read(get_gcwq_nr_running(gcwq->cpu)));
 }