author | Tejun Heo <tj@kernel.org> | 2016-05-16 09:33:01 +0300 |
---|---|---|
committer | Sasha Levin <sasha.levin@oracle.com> | 2016-05-16 09:29:12 -0400 |
commit | 39ad49c725b892cc54878da2ddd7a7b418d21893 (patch) | |
tree | c8f4b6fd05c250b2d0ffaba983a527a5837c4acd /kernel | |
parent | 65b49b662365b54d3c372734a8591b404a43397f (diff) | |
timers: Use proper base migration in add_timer_on()
[ Upstream commit 22b886dd1018093920c4250dee2a9a3cb7cff7b8 ]
Regardless of the previous CPU a timer was on, add_timer_on()
currently simply sets timer->flags to the new CPU. As the caller must
be seeing the timer as idle, this is locally fine, but the timer
leaving the old base while unlocked can lead to race conditions as
follows.
Let's say timer was on cpu 0.
cpu 0                                   cpu 1
-----------------------------------------------------------------------------
del_timer(timer) succeeds
                                        del_timer(timer)
                                        lock_timer_base(timer) locks cpu_0_base
add_timer_on(timer, 1)
  spin_lock(&cpu_1_base->lock)
  timer->flags set to cpu_1_base
  operates on @timer                    operates on @timer
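To make the interleaving concrete, here is a minimal self-contained userspace model of the buggy locking order. It is an illustration only, not kernel code: pthread mutexes stand in for the per-CPU base spinlocks, and a plain base pointer stands in for the base/CPU bits that add_timer_on() rewrites. The point it shows is that the pre-fix add_timer_on() takes only the destination base's lock, so a concurrent del_timer() that has already resolved and locked the old base keeps operating on a timer that lock no longer protects:

```c
/*
 * Userspace model of the race (illustration only; pthread mutexes
 * stand in for the per-CPU tvec_base locks, and "base" stands in for
 * the base/CPU bits of the timer).
 */
#include <pthread.h>
#include <stdio.h>

struct base { pthread_mutex_t lock; };

struct timer {
	struct base *base;	/* which per-CPU base owns the timer */
	int pending;
};

static struct base cpu_0_base = { PTHREAD_MUTEX_INITIALIZER };
static struct base cpu_1_base = { PTHREAD_MUTEX_INITIALIZER };
static struct timer timer = { .base = &cpu_0_base };

/* Pre-fix add_timer_on(): locks only the destination CPU's base. */
static void buggy_add_timer_on(struct timer *t, struct base *new_base)
{
	pthread_mutex_lock(&new_base->lock);
	t->base = new_base;	/* the old base may be locked by someone else */
	t->pending = 1;
	pthread_mutex_unlock(&new_base->lock);
}

int main(void)
{
	/* cpu 1: del_timer() -> lock_timer_base() resolves and locks cpu_0_base */
	struct base *locked = timer.base;
	pthread_mutex_lock(&locked->lock);

	/* cpu 0: add_timer_on(timer, 1) repoints the timer meanwhile */
	buggy_add_timer_on(&timer, &cpu_1_base);

	/* cpu 1 now "operates on @timer" under a lock that no longer owns it */
	printf("cpu 1 holds %s, but the timer now belongs to %s\n",
	       locked == &cpu_0_base ? "cpu_0_base" : "cpu_1_base",
	       timer.base == &cpu_0_base ? "cpu_0_base" : "cpu_1_base");

	pthread_mutex_unlock(&locked->lock);
	return 0;
}
```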
This race triggered with mod_delayed_work_on(), which contains an
"if (del_timer()) add_timer_on()" sequence, eventually leading to the
following oops.
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff810ca6e9>] detach_if_pending+0x69/0x1a0
...
Workqueue: wqthrash wqthrash_workfunc [wqthrash]
task: ffff8800172ca680 ti: ffff8800172d0000 task.ti: ffff8800172d0000
RIP: 0010:[<ffffffff810ca6e9>] [<ffffffff810ca6e9>] detach_if_pending+0x69/0x1a0
...
Call Trace:
[<ffffffff810cb0b4>] del_timer+0x44/0x60
[<ffffffff8106e836>] try_to_grab_pending+0xb6/0x160
[<ffffffff8106e913>] mod_delayed_work_on+0x33/0x80
[<ffffffffa0000081>] wqthrash_workfunc+0x61/0x90 [wqthrash]
[<ffffffff8106dba8>] process_one_work+0x1e8/0x650
[<ffffffff8106e05e>] worker_thread+0x4e/0x450
[<ffffffff810746af>] kthread+0xef/0x110
[<ffffffff8185980f>] ret_from_fork+0x3f/0x70
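The trace goes through exactly that del_timer()/add_timer_on() path inside the workqueue code (try_to_grab_pending() deletes the delayed work's timer, re-queueing re-adds it on the requested CPU). As a rough sketch only, not the actual wqthrash module from the trace (the thrash_work/thrash_fn names and the bouncing strategy are made up, and CPUs 0 and 1 are assumed online), a reproducer of the same shape can re-arm a single delayed work item on alternating CPUs with mod_delayed_work_on():

```c
/*
 * Hypothetical reproducer sketch (NOT the wqthrash module from the
 * trace above).  Each run of the work item re-arms itself on the other
 * CPU, so the "if (del_timer()) add_timer_on()" sequence inside
 * mod_delayed_work_on() is exercised repeatedly across CPUs 0 and 1.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static struct delayed_work thrash_work;
static int target_cpu;

static void thrash_fn(struct work_struct *work)
{
	/* Bounce between CPU 0 and CPU 1 on every execution. */
	target_cpu ^= 1;
	mod_delayed_work_on(target_cpu, system_wq, &thrash_work, 1);
}

static int __init thrash_init(void)
{
	INIT_DELAYED_WORK(&thrash_work, thrash_fn);
	mod_delayed_work_on(0, system_wq, &thrash_work, 1);
	return 0;
}

static void __exit thrash_exit(void)
{
	cancel_delayed_work_sync(&thrash_work);
}

module_init(thrash_init);
module_exit(thrash_exit);
MODULE_LICENSE("GPL");
```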
Fix it by updating add_timer_on() to perform proper migration as
__mod_timer() does.
Reported-and-tested-by: Jeff Layton <jlayton@poochiereds.net>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Chris Worley <chris.worley@primarydata.com>
Cc: bfields@fieldses.org
Cc: Michael Skralivetsky <michael.skralivetsky@primarydata.com>
Cc: Trond Myklebust <trond.myklebust@primarydata.com>
Cc: Shaohua Li <shli@fb.com>
Cc: Jeff Layton <jlayton@poochiereds.net>
Cc: kernel-team@fb.com
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20151029103113.2f893924@tlielax.poochiereds.net
Link: http://lkml.kernel.org/r/20151104171533.GI5749@mtj.duckdns.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> (backport for 3.18)
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/time/timer.c | 20 |
1 file changed, 17 insertions, 3 deletions
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 3260ffdb368f..3c4e3116cdb1 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -956,13 +956,27 @@ EXPORT_SYMBOL(add_timer);
  */
 void add_timer_on(struct timer_list *timer, int cpu)
 {
-	struct tvec_base *base = per_cpu(tvec_bases, cpu);
+	struct tvec_base *new_base = per_cpu(tvec_bases, cpu);
+	struct tvec_base *base;
 	unsigned long flags;
 
 	timer_stats_timer_set_start_info(timer);
 	BUG_ON(timer_pending(timer) || !timer->function);
-	spin_lock_irqsave(&base->lock, flags);
-	timer_set_base(timer, base);
+
+	/*
+	 * If @timer was on a different CPU, it should be migrated with the
+	 * old base locked to prevent other operations proceeding with the
+	 * wrong base locked.  See lock_timer_base().
+	 */
+	base = lock_timer_base(timer, &flags);
+	if (base != new_base) {
+		timer_set_base(timer, NULL);
+		spin_unlock(&base->lock);
+		base = new_base;
+		spin_lock(&base->lock);
+		timer_set_base(timer, base);
+	}
+
 	debug_activate(timer, timer->expires);
 	internal_add_timer(base, timer);
 	spin_unlock_irqrestore(&base->lock, flags);
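A note on the backported hunk: timer_set_base(timer, NULL) clears the timer's base pointer while the old base lock is still held, and the new base is published only under the new base's lock. As the in-diff comment points out, this is the migration protocol lock_timer_base() expects (and the one __mod_timer() already follows): a concurrent lock_timer_base() caller that observes a NULL base retries until the handover completes, so no path can end up operating on the timer while holding the wrong base's lock, which is precisely the hole shown in the race diagram above.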