author | Ming Lei <tom.leiming@gmail.com> | 2014-01-14 17:56:42 -0800
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-01-15 14:19:42 +0700
commit | 74e72f894d56eb9d2e1218530c658e7d297e002b (patch)
tree | b8fe55badb0f3edc43a68576823ffad76f5e5770 /lib/percpu_counter.c
parent | 5a610fcc7390ee60308deaf09426ada87a1eeec2 (diff)
lib/percpu_counter.c: fix __percpu_counter_add()
__percpu_counter_add() may be called from a softirq/hardirq handler (for
example, blk_mq_queue_exit() is typically called in hardirq/softirq
context), so the per-cpu counter must be updated with this_cpu_add(), the
irq-safe helper; otherwise counts may be lost.
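To make the failure concrete, here is a sketch (not part of the commit
message) of how the old, non-irq-safe fast path can lose an update; the
interleaving assumes a hardirq that updates the same counter arrives
between the read and the write on one CPU:

	/*
	 * Old fast path, simplified -- a classic lost update:
	 *
	 *   count = __this_cpu_read(*fbc->counters) + amount;  task reads, e.g., 5
	 *
	 *     <hardirq> __percpu_counter_add(fbc, 1);           handler runs the same
	 *                                                       path, slot becomes 6
	 *
	 *   __this_cpu_write(*fbc->counters, count);            task writes 5 + amount,
	 *                                                       the handler's +1 is gone
	 *
	 * this_cpu_add() folds the read-modify-write into one irq-safe
	 * operation, so the interrupt's contribution cannot be overwritten.
	 */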
This fixes the problem that 'rmmod null_blk' hangs in blk_cleanup_queue()
because of miscounting of request_queue->mq_usage_counter.
This patch supersedes the previous one, "lib/percpu_counter.c: disable
local irq when updating percpu counter", and takes Andrew's approach,
which may be more efficient on architectures (x86, s390) that have an
optimized this_cpu_add().
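As a rough illustration of why that is cheaper (an assumption about the
generated code, not something stated in the patch): on x86, this_cpu_add()
compiles to a single gs-segment-relative add, which cannot be split by an
interrupt on the local CPU, so no explicit irq disabling is needed;
architectures without such an instruction fall back to a generic helper
that brackets the update with local_irq_save()/local_irq_restore():

	/* x86-style optimized form (illustrative): one uninterruptible insn */
	/*	addq %[amount], %%gs:[counter]                               */

	/* Generic fallback (illustrative, per the asm-generic percpu helpers): */
	/*	local_irq_save(flags);                                       */
	/*	raw_cpu_add(*fbc->counters, amount);                         */
	/*	local_irq_restore(flags);                                    */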
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Shaohua Li <shli@fusionio.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Fan Du <fan.du@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'lib/percpu_counter.c')
-rw-r--r-- | lib/percpu_counter.c | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 7473ee3b4ee7..1da85bb1bc07 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -82,10 +82,10 @@ void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
 		unsigned long flags;
 		raw_spin_lock_irqsave(&fbc->lock, flags);
 		fbc->count += count;
+		__this_cpu_sub(*fbc->counters, count - amount);
 		raw_spin_unlock_irqrestore(&fbc->lock, flags);
-		__this_cpu_write(*fbc->counters, 0);
 	} else {
-		__this_cpu_write(*fbc->counters, count);
+		this_cpu_add(*fbc->counters, amount);
 	}
 	preempt_enable();
 }
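For context, the whole function after this patch reads roughly as follows;
the hunk above is completed with the opening lines recalled from the
surrounding source (the per-cpu read that computes count), so treat this as
a sketch to be checked against the tree rather than an authoritative
listing:

	void __percpu_counter_add(struct percpu_counter *fbc, s64 amount, s32 batch)
	{
		s64 count;

		preempt_disable();
		count = __this_cpu_read(*fbc->counters) + amount;
		if (count >= batch || count <= -batch) {
			unsigned long flags;

			/* Slow path: fold the batched value into fbc->count and
			 * drain the per-cpu slot. __this_cpu_sub() is not irq-safe
			 * by itself, but irqs are off inside the irqsave region.
			 * Subtracting count - amount removes exactly the old
			 * per-cpu value now accounted for in fbc->count, while
			 * leaving any concurrent irq-time deltas in the slot. */
			raw_spin_lock_irqsave(&fbc->lock, flags);
			fbc->count += count;
			__this_cpu_sub(*fbc->counters, count - amount);
			raw_spin_unlock_irqrestore(&fbc->lock, flags);
		} else {
			/* Fast path: one irq-safe read-modify-write, so an update
			 * from a hardirq/softirq handler can no longer be lost. */
			this_cpu_add(*fbc->counters, amount);
		}
		preempt_enable();
	}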