author		Lai Jiangshan <laijs@linux.alibaba.com>		2021-01-11 23:26:33 +0800
committer	Peter Zijlstra <peterz@infradead.org>		2021-01-22 15:09:41 +0100
commit		547a77d02f8cfb345631ce23b5b548d27afa0fc4 (patch)
tree		09f6cd90cb4d818ffa6df2feb13bb955288c11a1 /kernel/workqueue.c
parent		36c6e17bf16922935a5a0dd073d5b032d34aa73d (diff)
download	linux-547a77d02f8cfb345631ce23b5b548d27afa0fc4.tar.gz
		linux-547a77d02f8cfb345631ce23b5b548d27afa0fc4.tar.bz2
		linux-547a77d02f8cfb345631ce23b5b548d27afa0fc4.zip
workqueue: Use cpu_possible_mask instead of cpu_active_mask to break affinity
The scheduler no longer breaks affinity for us, so we have to emulate its
former behavior ourselves; that behavior is "change the cpumask to
cpu_possible_mask".

Other CPUs may come online later while the worker is still running with
pending work items. The worker should be allowed to use those later-onlined
CPUs as before and process the work items ASAP. set_cpus_allowed_ptr() copies
the mask it is given at call time, so with cpu_active_mask a CPU that comes
online after the unbind remains forbidden to the worker; cpu_possible_mask
already includes every CPU that can ever come online, so only it achieves
this goal.

Fixes: 06249738a41a ("workqueue: Manually break affinity on hotplug")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Tested-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20210111152638.2417-4-jiangshanlai@gmail.com
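To make the affected path concrete, here is a minimal sketch of the fixed
loop. unbind_workers_sketch() is a hypothetical stand-in, not the kernel
function itself; the real unbind_workers() in kernel/workqueue.c also
handles worker flags and runs under wq_pool_attach_mutex and pool->lock,
as the hunk below shows.

	/* A minimal sketch: locking and worker-flag handling are elided,
	 * and the function name is hypothetical. */
	static void unbind_workers_sketch(struct worker_pool *pool)
	{
		struct worker *worker;

		/*
		 * Widen each worker's affinity to every possible CPU, not
		 * just the currently active ones.  A CPU that comes online
		 * later is then already allowed, so the scheduler can run
		 * the worker there and pending work items are processed
		 * ASAP.
		 */
		for_each_pool_worker(worker, pool)
			WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
							  cpu_possible_mask) < 0);
	}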
Diffstat (limited to 'kernel/workqueue.c')
-rw-r--r--	kernel/workqueue.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 9880b6c0e272..1646331546eb 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4920,7 +4920,7 @@ static void unbind_workers(int cpu)
 	raw_spin_unlock_irq(&pool->lock);
 
 	for_each_pool_worker(worker, pool)
-		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_active_mask) < 0);
+		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task, cpu_possible_mask) < 0);
 
 	mutex_unlock(&wq_pool_attach_mutex);
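For contrast with the hot-add side: when the CPU comes back online,
rebind_workers() in the same file re-affines each worker to the pool's own
cpumask. The loop below is a simplified paraphrase of that step (locking
and flag handling elided), assuming the v5.11-era code this patch targets.

	/* Simplified from rebind_workers(): once the pool's CPU is online
	 * again, restore each worker's affinity to the pool's cpumask. */
	for_each_pool_worker(worker, pool)
		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
						  pool->attrs->cpumask) < 0);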