| author | Andi Kleen <ak@linux.intel.com> | 2018-02-28 13:43:28 -0800 |
| --- | --- | --- |
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2022-06-25 11:46:25 +0200 |
| commit | d3146b90e3661b38dc24e64e9deea550ecc31b0c (patch) | |
| tree | 41339fccaf83d2bea3277f347c5d4856b8bb549d | |
| parent | d218c093c864432fc6038682868f3d3c7c69e0cd (diff) | |
random: optimize add_interrupt_randomness
commit e8e8a2e47db6bb85bb0cb21e77b5c6aaedf864b4 upstream.
add_interrupt_randomness() always wakes up code blocked on
/dev/random. This wake up is done unconditionally. Unfortunately this
means all interrupts take the wait queue spinlock, which can be rather
expensive on large systems processing lots of interrupts.

We saw about 1% of CPU time spent spinning on this lock with a large
macro workload running on a large system.

I believe it's a recent regression (?)

Always check whether there is a waiter on the wait queue before waking
up. This check can be done without taking the wait queue's spinlock.
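As a minimal sketch of the resulting pattern (illustrative only, not
the upstream hunk: wq_has_sleeper() and wake_up_interruptible() are
the real <linux/wait.h> helpers, while maybe_wake_readers() is a
made-up wrapper used here purely for illustration):

```c
#include <linux/wait.h>

/*
 * Hypothetical wrapper showing the check-before-wake pattern:
 * wq_has_sleeper() only issues a memory barrier and reads the wait
 * queue's list head, so the common "nobody is blocked on /dev/random"
 * case never touches the wait-queue spinlock.  Only when a sleeper is
 * present do we pay for the lock inside wake_up_interruptible().
 */
static void maybe_wake_readers(wait_queue_head_t *wq,
			       int entropy_bits, int wakeup_bits)
{
	if (entropy_bits >= wakeup_bits && wq_has_sleeper(wq))
		wake_up_interruptible(wq);
}
```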
  1.06%         10460  [kernel.vmlinux]  [k] native_queued_spin_lock_slowpath
        |
        ---native_queued_spin_lock_slowpath
           |
            --0.57%--_raw_spin_lock_irqsave
                      |
                       --0.56%--__wake_up_common_lock
                                 credit_entropy_bits
                                 add_interrupt_randomness
                                 handle_irq_event_percpu
                                 handle_irq_event
                                 handle_edge_irq
                                 handle_irq
                                 do_IRQ
                                 common_interrupt
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r-- | drivers/char/random.c | 3 |
1 file changed, 2 insertions, 1 deletion
diff --git a/drivers/char/random.c b/drivers/char/random.c
index e60ec81b592b..535f10c6bdcd 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -721,7 +721,8 @@ retry:
 	}
 
 	/* should we wake readers? */
-	if (entropy_bits >= random_read_wakeup_bits) {
+	if (entropy_bits >= random_read_wakeup_bits &&
+	    wq_has_sleeper(&random_read_wait)) {
 		wake_up_interruptible(&random_read_wait);
 		kill_fasync(&fasync, SIGIO, POLL_IN);
 	}
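For reference, the lockless check is safe because wq_has_sleeper()
pairs its memory barrier with the barrier the waiter executes when it
adds itself to the wait queue (e.g. via prepare_to_wait()).
Paraphrased from include/linux/wait.h (details vary across kernel
versions), the helper amounts to roughly:

```c
/*
 * Rough paraphrase of wq_has_sleeper(), not copied verbatim from any
 * particular kernel version: order this CPU's earlier stores (the
 * entropy-count update) against the waiter's enqueue, then test the
 * wait-queue list without taking wq_head->lock.
 */
static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
{
	smp_mb();
	return waitqueue_active(wq_head);
}
```

A reader that enqueues itself just after the check is not lost:
wait_event_interruptible() re-checks the wakeup condition after
adding the task to the queue, and the paired barriers ensure it then
observes the updated entropy count, so either it proceeds on its own
or a later entropy credit finds it on the queue and wakes it.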