| author | Thomas Gleixner <tglx@linutronix.de> | 2021-07-29 13:01:59 +0200 |
|---|---|---|
| committer | Thomas Gleixner <tglx@linutronix.de> | 2021-08-28 01:33:02 +0200 |
| commit | b542e383d8c005f06a131e2b40d5889b812f19c6 (patch) | |
| tree | 14a6c55f5366f44df978f27c4b8a92f0a30b0689 /fs/aio.c | |
| parent | 366e7ad6ba5f4cb2ffd0b7316e404d6ee9c0f401 (diff) | |
eventfd: Make signal recursion protection a task bit
The recursion protection for eventfd_signal() is based on a per-CPU
variable and relies on the !RT semantics of spin_lock_irqsave() for
protecting this per-CPU variable. On RT kernels spin_lock_irqsave() disables
neither preemption nor interrupts, which allows the section holding the
spin lock to be preempted. If the preempting task invokes eventfd_signal()
as well, then the recursion warning triggers.
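For reference, the pre-patch protection looks roughly like the sketch below. This is a hedged reconstruction from the description above, not code taken from this page: the eventfd side is outside the fs/aio.c-limited diff, and the counter name eventfd_wake_count as well as the trimmed struct layout are assumptions.

```c
#include <linux/eventfd.h>
#include <linux/percpu.h>
#include <linux/poll.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/* Trimmed to the fields the sketch needs (assumed layout). */
struct eventfd_ctx {
	wait_queue_head_t	wqh;
	__u64			count;
};

/* Per-CPU recursion guard used by the pre-patch code (assumed name). */
static DEFINE_PER_CPU(int, eventfd_wake_count);

/* Non-zero while a task on this CPU is inside eventfd_signal(). */
static inline bool eventfd_signal_count(void)
{
	return this_cpu_read(eventfd_wake_count);
}

__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
{
	unsigned long flags;

	/* Warn and bail out instead of recursing through the wakeup
	 * handlers and overflowing the task stack. */
	if (WARN_ON_ONCE(this_cpu_read(eventfd_wake_count)))
		return 0;

	/* On !RT this also disables interrupts and preemption, so only one
	 * task per CPU can be inside the locked section and the per-CPU
	 * counter is reliable. On RT the lock is sleepable, the section can
	 * be preempted, and a preempting eventfd_signal() on the same CPU
	 * sees a non-zero count and trips the warning. */
	spin_lock_irqsave(&ctx->wqh.lock, flags);
	this_cpu_inc(eventfd_wake_count);
	ctx->count += n;
	if (waitqueue_active(&ctx->wqh))
		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
	this_cpu_dec(eventfd_wake_count);
	spin_unlock_irqrestore(&ctx->wqh.lock, flags);

	return n;
}
```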
Paolo suggested protecting the per-CPU variable with a local lock, but
that is heavyweight and not actually necessary. The goal of this protection
is to prevent the task stack from overflowing, which can be achieved with
per-task recursion protection as well.
Replace the per-CPU variable with a per-task bit, similar to other recursion
protection bits like task_struct::in_page_owner. This works on both !RT and
RT kernels and, as a side effect, removes the extra per-CPU storage.
No functional change for !RT kernels.
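The eventfd and task_struct side of the change is outside the fs/aio.c diffstat shown below; a minimal sketch of the per-task bit described above might look as follows. The helper name eventfd_signal_allowed() appears in the diff, but the bit-field name in_eventfd_signal and its exact placement are assumptions inferred from the commit message.

```c
/* include/linux/sched.h (sketch): task_struct grows a recursion bit,
 * analogous to the in_page_owner bit mentioned above. */
struct task_struct {
	/* ... */
#ifdef CONFIG_EVENTFD
	/* Recursion prevention for eventfd_signal() (assumed name). */
	unsigned			in_eventfd_signal:1;
#endif
	/* ... */
};

/* include/linux/eventfd.h (sketch): replacement for the per-CPU
 * eventfd_signal_count(). Signalling is allowed only if the current task
 * is not already inside eventfd_signal(). */
static inline bool eventfd_signal_allowed(void)
{
	return !current->in_eventfd_signal;
}

/* fs/eventfd.c (sketch): set and clear the bit around the wakeup. */
__u64 eventfd_signal(struct eventfd_ctx *ctx, __u64 n)
{
	unsigned long flags;

	if (WARN_ON_ONCE(current->in_eventfd_signal))
		return 0;

	spin_lock_irqsave(&ctx->wqh.lock, flags);
	current->in_eventfd_signal = 1;
	ctx->count += n;
	if (waitqueue_active(&ctx->wqh))
		wake_up_locked_poll(&ctx->wqh, EPOLLIN);
	current->in_eventfd_signal = 0;
	spin_unlock_irqrestore(&ctx->wqh.lock, flags);

	return n;
}
```

Because the flag lives in task_struct, a task preempted inside eventfd_signal() on an RT kernel no longer disturbs the state seen by other tasks on the same CPU, so the warning fires only for genuine recursion by the same task; on !RT kernels the behaviour is unchanged.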
Reported-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Link: https://lore.kernel.org/r/87wnp9idso.ffs@tglx
Diffstat (limited to 'fs/aio.c')
-rw-r--r-- | fs/aio.c | 2 |
1 file changed, 1 insertion, 1 deletion
```diff
@@ -1695,7 +1695,7 @@ static int aio_poll_wake(struct wait_queue_entry *wait, unsigned mode, int sync,
 		list_del(&iocb->ki_list);
 		iocb->ki_res.res = mangle_poll(mask);
 		req->done = true;
-		if (iocb->ki_eventfd && eventfd_signal_count()) {
+		if (iocb->ki_eventfd && eventfd_signal_allowed()) {
 			iocb = NULL;
 			INIT_WORK(&req->work, aio_poll_put_work);
 			schedule_work(&req->work);
```