path: root/include/linux/rcutiny.h
author    Thomas Gleixner <tglx@linutronix.de>    2020-05-03 15:08:52 +0200
committer Thomas Gleixner <tglx@linutronix.de>    2020-05-19 15:51:21 +0200
commit    8ae0ae6737ad449c8ae21e2bb01d9736f360a933 (patch)
tree      1143878ecb15a2eeb6790b837da31d14a96c0776 /include/linux/rcutiny.h
parent    9ea366f669ded353ae49754216c042e7d2f72ba6 (diff)
rcu: Provide rcu_irq_exit_preempt()
Interrupts and exceptions invoke rcu_irq_enter() on entry and need to invoke rcu_irq_exit() before they either return to the interrupted code or invoke the scheduler due to preemption.

The general assumption is that RCU idle code has to have preemption disabled so that a return from interrupt cannot schedule. The return-from-interrupt code therefore invokes rcu_irq_exit() and then preempt_schedule_irq(). If there is any imbalance in the rcu_irq/nmi* invocations, or if RCU idle code runs with preemption enabled, this goes unnoticed until the CPU goes idle or some other RCU check is executed.

Provide rcu_irq_exit_preempt(), which can be invoked from the interrupt/exception return code when preemption is enabled. It invokes rcu_irq_exit() and contains a few sanity checks, active when CONFIG_PROVE_RCU is enabled, to catch such issues directly.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200505134904.364456424@linutronix.de
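The diffstat below only covers the Tiny-RCU stub in include/linux/rcutiny.h. On the Tree-RCU side the helper would look roughly like the sketch that follows; this is a non-authoritative sketch, and the specific per-CPU fields (rcu_data.dynticks_nesting, rcu_data.dynticks_nmi_nesting), DYNTICK_IRQ_NONIDLE, and the warning strings are assumptions based on the existing dynticks accounting, not taken from this page:

/*
 * Sketch (assumed, not shown in this diffstat): Tree-RCU variant of
 * rcu_irq_exit_preempt(). Exits the irq from RCU's point of view and,
 * with CONFIG_PROVE_RCU, checks that scheduling is actually safe here.
 */
void rcu_irq_exit_preempt(void)
{
	lockdep_assert_irqs_disabled();
	rcu_irq_exit();

	/* Catch imbalanced rcu_irq/nmi* invocations. */
	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nesting) <= 0,
			 "RCU dynticks_nesting counter underflow/zero!");
	RCU_LOCKDEP_WARN(__this_cpu_read(rcu_data.dynticks_nmi_nesting) !=
			 DYNTICK_IRQ_NONIDLE,
			 "Bad RCU dynticks_nmi_nesting counter!");
	/* Catch RCU idle code that ran with preemption enabled. */
	RCU_LOCKDEP_WARN(rcu_dynticks_curr_cpu_in_eqs(),
			 "RCU in extended quiescent state!");
}

The intended caller is the interrupt/exception return path: when it is about to call preempt_schedule_irq() with preemption enabled, it invokes rcu_irq_exit_preempt() instead of plain rcu_irq_exit(), so the RCU_LOCKDEP_WARN checks (which only fire under CONFIG_PROVE_RCU) flag the problem at the point of misuse rather than later at idle.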
Diffstat (limited to 'include/linux/rcutiny.h')
-rw-r--r--  include/linux/rcutiny.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/rcutiny.h b/include/linux/rcutiny.h
index 3465ba704a11..980eb78751d9 100644
--- a/include/linux/rcutiny.h
+++ b/include/linux/rcutiny.h
@@ -71,6 +71,7 @@ static inline void rcu_irq_enter(void) { }
static inline void rcu_irq_exit_irqson(void) { }
static inline void rcu_irq_enter_irqson(void) { }
static inline void rcu_irq_exit(void) { }
+static inline void rcu_irq_exit_preempt(void) { }
static inline void exit_rcu(void) { }
static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{