author | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2022-05-12 20:23:08 -0700
committer | Andrew Morton <akpm@linux-foundation.org> | 2022-05-13 07:20:18 -0700
commit | 3f80492001aa64ac585016050ace8680611c2e20 (patch)
tree | f3b095abf054739c033724b6e0588fe8df5aa808 /mm
parent | fe573327ffb1deba802c91dd1d4ff41dafa97a0e (diff)
mm/vmalloc: use raw_cpu_ptr() for vmap_block_queue access
The per-CPU resource vmap_block_queue is accessed via get_cpu_var(). That
macro disables preemption and then loads the pointer from the current CPU.
This does not work on PREEMPT_RT because spinlock_t is a sleeping lock there, and it is later acquired within the preempt-disabled section.
There is no need to disable preemption while accessing the per-CPU struct
vmap_block_queue because the list is protected with a spinlock_t. The
per-CPU struct is also accessed cross-CPU in purge_fragmented_blocks().
By using raw_cpu_ptr() it is possible that the task migrates to another
CPU and ends up using the struct of another CPU. This is fine because the
list is locked and the locked section is very short.
Use raw_cpu_ptr() to access vmap_block_queue.
Link: https://lkml.kernel.org/r/YnKx3duAB53P7ojN@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/vmalloc.c | 6
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 5b59ccafcd46..07db42455dd4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1938,11 +1938,10 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 		return ERR_PTR(err);
 	}
 
-	vbq = &get_cpu_var(vmap_block_queue);
+	vbq = raw_cpu_ptr(&vmap_block_queue);
 	spin_lock(&vbq->lock);
 	list_add_tail_rcu(&vb->free_list, &vbq->free);
 	spin_unlock(&vbq->lock);
-	put_cpu_var(vmap_block_queue);
 
 	return vaddr;
 }
@@ -2021,7 +2020,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 	order = get_order(size);
 
 	rcu_read_lock();
-	vbq = &get_cpu_var(vmap_block_queue);
+	vbq = raw_cpu_ptr(&vmap_block_queue);
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
@@ -2044,7 +2043,6 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 		break;
 	}
 
-	put_cpu_var(vmap_block_queue);
 	rcu_read_unlock();
 
 	/* Allocate new block if nothing was found */
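
For illustration, below is a minimal out-of-tree sketch of the two access patterns the patch contrasts: get_cpu_var()/put_cpu_var(), which keeps preemption disabled across the list update, versus raw_cpu_ptr(), which leaves preemption enabled and relies solely on the spinlock. The struct and function names (vbq_demo, vbq_demo_init, vbq_demo_add_old, vbq_demo_add_new) are hypothetical stand-ins, not identifiers from mm/vmalloc.c.

/* Illustrative sketch only -- not kernel source. */
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

struct vbq_demo {
	spinlock_t lock;
	struct list_head free;
};

static DEFINE_PER_CPU(struct vbq_demo, vbq_demo);

static void __init vbq_demo_init(void)
{
	int cpu;

	/* Per-CPU structs need explicit init, one instance per possible CPU. */
	for_each_possible_cpu(cpu) {
		struct vbq_demo *vbq = per_cpu_ptr(&vbq_demo, cpu);

		spin_lock_init(&vbq->lock);
		INIT_LIST_HEAD(&vbq->free);
	}
}

/*
 * Old pattern: get_cpu_var() disables preemption until put_cpu_var().
 * On PREEMPT_RT the spin_lock() below is a sleeping lock, so acquiring it
 * inside the preempt-disabled region is not allowed.
 */
static void vbq_demo_add_old(struct list_head *entry)
{
	struct vbq_demo *vbq = &get_cpu_var(vbq_demo);

	spin_lock(&vbq->lock);
	list_add_tail(entry, &vbq->free);
	spin_unlock(&vbq->lock);
	put_cpu_var(vbq_demo);
}

/*
 * New pattern: raw_cpu_ptr() only computes this CPU's pointer and leaves
 * preemption enabled.  A migration afterwards merely means the entry lands
 * on another CPU's queue; that is fine because the list is protected by
 * the spinlock and the critical section is short.
 */
static void vbq_demo_add_new(struct list_head *entry)
{
	struct vbq_demo *vbq = raw_cpu_ptr(&vbq_demo);

	spin_lock(&vbq->lock);
	list_add_tail(entry, &vbq->free);
	spin_unlock(&vbq->lock);
}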