From 616db8779b1e3f93075df691432cccc5ef3c3ba0 Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Wed, 17 May 2023 17:02:08 -1000
Subject: workqueue: Automatically mark CPU-hogging work items CPU_INTENSIVE

If a per-cpu work item hogs the CPU, it can prevent other work items from
starting through concurrency management. A per-cpu workqueue which intends
to host such CPU-hogging work items can choose not to participate in
concurrency management by setting %WQ_CPU_INTENSIVE; however, this can be
error-prone and difficult to debug when missed.

This patch adds an automatic CPU usage based detection. If a
concurrency-managed work item consumes more CPU time than the threshold
(10ms by default) continuously without intervening sleeps, wq_worker_tick()
which is called from scheduler_tick() will detect the condition and
automatically mark it CPU_INTENSIVE.

The mechanism isn't foolproof:

* Detection depends on tick hitting the work item. Getting preempted at
  the right timings may allow a violating work item to evade detection at
  least temporarily.

* nohz_full CPUs may not be running ticks and thus can fail detection.

* Even when detection is working, the 10ms detection delays can add up if
  many CPU-hogging work items are queued at the same time.

However, in the vast majority of cases, this should be able to detect
violations reliably and provide reasonable protection with a small
increase in code complexity.

If some work items trigger this condition repeatedly, the bigger problem
likely is the CPU being saturated with such per-cpu work items and the
solution would be making them UNBOUND. The next patch will add a debug
mechanism to help spot such cases.

v4: Documentation for workqueue.cpu_intensive_thresh_us added to
    kernel-parameters.txt.

v3: Switch to use wq_worker_tick() instead of hooking into preemptions as
    suggested by Peter.

v2: Lai pointed out that wq_worker_stopping() also needs to be called from
    preemption and rtlock paths and an earlier patch was updated
    accordingly. This patch adds a comment describing the risk of infinite
    recursions and how they're avoided.
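To make the detection idea concrete, below is a minimal, self-contained
userspace C sketch of the technique, not the kernel implementation. The
struct, function, and constant names (worker, wq_tick,
CPU_INTENSIVE_THRESH_NS) are invented for illustration; the real logic
lives in wq_worker_tick() and operates on the kernel's worker/pool
structures.

	/*
	 * Simplified userspace model of tick-based CPU-hog detection.
	 * All identifiers are illustrative, not the kernel's.
	 */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define CPU_INTENSIVE_THRESH_NS (10 * 1000 * 1000ULL)	/* 10ms default */

	struct worker {
		uint64_t work_started_runtime;	/* task runtime when the current work item began */
		bool     cpu_intensive;		/* excluded from concurrency management */
	};

	/* Called from the (simulated) scheduler tick while a work item is running. */
	static void wq_tick(struct worker *w, uint64_t task_runtime_ns)
	{
		if (w->cpu_intensive)
			return;

		/* CPU consumed by the current work item without an intervening sleep. */
		if (task_runtime_ns - w->work_started_runtime < CPU_INTENSIVE_THRESH_NS)
			return;

		/*
		 * The work item has hogged the CPU past the threshold: stop
		 * counting it against the concurrency limit so other per-cpu
		 * work items can start.
		 */
		w->cpu_intensive = true;
		printf("work item marked CPU_INTENSIVE\n");
	}

	int main(void)
	{
		struct worker w = { .work_started_runtime = 0, .cpu_intensive = false };
		uint64_t runtime = 0;

		/* Simulate 4ms of CPU time per tick; detection fires on the third tick. */
		for (int i = 0; i < 3; i++) {
			runtime += 4 * 1000 * 1000ULL;
			wq_tick(&w, runtime);
		}
		return 0;
	}

The sketch only captures the threshold comparison; the real code is also
responsible for letting the pool start another worker once the hogging
work item stops being concurrency-managed.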
Signed-off-by: Tejun Heo
Acked-by: Peter Zijlstra
Cc: Linus Torvalds
Cc: Lai Jiangshan
---
 Documentation/core-api/workqueue.rst | 38 ++++++++++++++++++------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index 7e5c39310bbf..a389f31b025c 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -354,25 +354,25 @@ Monitoring
 Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

   $ tools/workqueue/wq_monitor.py events
-                              total  infl CMwake  mayday rescued
-  events                      18545     0      5       -       -
-  events_highpri                  8     0      0       -       -
-  events_long                     3     0      0       -       -
-  events_unbound              38306     0      -       -       -
-  events_freezable                0     0      0       -       -
-  events_power_efficient      29598     0      0       -       -
-  events_freezable_power_        10     0      0       -       -
-  sock_diag_events                0     0      0       -       -
-
-                              total  infl CMwake  mayday rescued
-  events                      18548     0      5       -       -
-  events_highpri                  8     0      0       -       -
-  events_long                     3     0      0       -       -
-  events_unbound              38322     0      -       -       -
-  events_freezable                0     0      0       -       -
-  events_power_efficient      29603     0      0       -       -
-  events_freezable_power_        10     0      0       -       -
-  sock_diag_events                0     0      0       -       -
+                              total  infl CPUitsv CMwake  mayday rescued
+  events                      18545     0       0      5       -       -
+  events_highpri                  8     0       0      0       -       -
+  events_long                     3     0       0      0       -       -
+  events_unbound              38306     0       -      -       -       -
+  events_freezable                0     0       0      0       -       -
+  events_power_efficient      29598     0       0      0       -       -
+  events_freezable_power_        10     0       0      0       -       -
+  sock_diag_events                0     0       0      0       -       -
+
+                              total  infl CPUitsv CMwake  mayday rescued
+  events                      18548     0       0      5       -       -
+  events_highpri                  8     0       0      0       -       -
+  events_long                     3     0       0      0       -       -
+  events_unbound              38322     0       -      -       -       -
+  events_freezable                0     0       0      0       -       -
+  events_power_efficient      29603     0       0      0       -       -
+  events_freezable_power_        10     0       0      0       -       -
+  sock_diag_events                0     0       0      0       -       -

   ...
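As a usage note, and as an assumption based on the parameter name and the
10ms default mentioned above (the authoritative description is the
kernel-parameters.txt entry added in v4), the detection threshold could be
tuned on the kernel command line in microseconds, for example:

  workqueue.cpu_intensive_thresh_us=10000

which would correspond to the default 10ms threshold.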