author | Tejun Heo <tj@kernel.org> | 2012-03-05 13:15:22 -0800
---|---|---
committer | Jens Axboe <axboe@kernel.dk> | 2012-03-06 21:27:24 +0100
commit | c875f4d0250a1f070fa26087a73bdd8f54c48100 | |
tree | 4ed2bae2fc48e54ac712d28eaaae8217c8064c1d /block/cfq-iosched.c | |
parent | 9f13ef678efd977487fc0c2e489f17c9a8c67a3e | |
blkcg: drop unnecessary RCU locking
Now that blkg additions / removals are always done under both the q and
blkcg locks, the only place RCU locking is still necessary is
blkg_lookup[_create](), for lookups done without the blkcg lock. This
patch drops the unnecessary RCU locking, replacing it with plain blkcg
locking where needed.
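To make the rule above concrete, here is a minimal sketch of the two access modes in kernel-style C. The struct layout and the blkg_lookup() signature are simplified assumptions for illustration, not the exact blk-cgroup definitions of this period:

```c
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct request_queue;
struct blkio_group;

struct blkio_cgroup {
	spinlock_t lock;	/* protects the blkcg's blkg list */
	/* ... */
};

/* assumed, simplified signature; the real lookup takes policy arguments */
struct blkio_group *blkg_lookup(struct blkio_cgroup *blkcg,
				struct request_queue *q);

/*
 * Lookup without blkcg->lock: the RCU read-side section is the only
 * thing that keeps the looked-up blkg from being freed underneath us.
 */
static void lookup_without_blkcg_lock(struct blkio_cgroup *blkcg,
				      struct request_queue *q)
{
	struct blkio_group *blkg;

	rcu_read_lock();
	blkg = blkg_lookup(blkcg, q);
	if (blkg) {
		/* blkg may only be dereferenced inside this RCU section */
	}
	rcu_read_unlock();
}

/*
 * Everywhere else, additions and removals happen under both the queue
 * and blkcg locks, so plain blkcg->lock is sufficient and no RCU is
 * needed.
 */
static void walk_under_blkcg_lock(struct blkio_cgroup *blkcg)
{
	spin_lock_irq(&blkcg->lock);
	/* ... traverse the blkcg's blkg list safely ... */
	spin_unlock_irq(&blkcg->lock);
}
```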
* blkiocg_pre_destroy() already performs proper locking and doesn't need
RCU. Dropped.
* blkio_read_blkg_stats() now uses blkcg->lock instead of the RCU read
lock. This isn't a hot path (see the sketch after this list).
* The now-unnecessary synchronize_rcu() calls in the queue exit paths are
removed, which makes q->nr_blkgs unnecessary. Dropped.
* The RCU annotation on blkg->q is removed.
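As referenced in the list above, here is a hedged sketch of the blkio_read_blkg_stats() change: the stats walk now runs under blkcg->lock rather than inside an RCU read-side section. The field names, the stand-in counter, and the modern three-argument hlist_for_each_entry() form are assumptions for illustration:

```c
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct blkio_group {
	struct hlist_node blkcg_node;	/* assumed linkage into the blkcg list */
	u64 bytes_serviced;		/* stand-in for the real stat counters */
};

struct blkio_cgroup {
	spinlock_t lock;
	struct hlist_head blkg_list;
};

static u64 read_blkg_stats_sketch(struct blkio_cgroup *blkcg)
{
	struct blkio_group *blkg;
	u64 total = 0;

	/*
	 * This isn't a hot path, so taking the spinlock instead of the
	 * cheaper rcu_read_lock() is an acceptable trade for simpler
	 * locking rules.
	 */
	spin_lock_irq(&blkcg->lock);
	hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node)
		total += blkg->bytes_serviced;
	spin_unlock_irq(&blkcg->lock);

	return total;
}
```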
-v2: Vivek pointed out that blkg_lookup_create() still needs to be
called under rcu_read_lock(). Updated.
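A sketch of the caller-side pattern the -v2 note refers to, with a simplified blkg_lookup_create() signature (the real function takes additional policy arguments): creation-capable lookups stay wrapped in rcu_read_lock(), here together with the queue lock, matching the locking the diff below relies on:

```c
#include <linux/blkdev.h>
#include <linux/rcupdate.h>

struct blkio_cgroup;
struct blkio_group;

/* assumed, simplified signature */
struct blkio_group *blkg_lookup_create(struct blkio_cgroup *blkcg,
				       struct request_queue *q);

static void lookup_create_pattern(struct blkio_cgroup *blkcg,
				  struct request_queue *q)
{
	struct blkio_group *blkg;

	rcu_read_lock();
	spin_lock_irq(q->queue_lock);
	blkg = blkg_lookup_create(blkcg, q);
	if (blkg) {
		/* ... set up / use the group while the locks pin it ... */
	}
	spin_unlock_irq(q->queue_lock);
	rcu_read_unlock();
}
```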
-v3: After the update, the stats_lock locking in blkio_read_blkg_stats()
shouldn't use the _irq variant, as it would otherwise end up enabling
IRQs while blkcg->lock is held. Fixed.
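To illustrate the -v3 fix: blkcg->lock is taken with the _irq variant, so the nested stats_lock must use the plain lock calls; an inner spin_unlock_irq() would re-enable interrupts while blkcg->lock is still held. The per-blkg stats_lock field here is an assumption for illustration:

```c
#include <linux/spinlock.h>

struct blkio_group {
	spinlock_t stats_lock;	/* assumed per-blkg stats lock */
	/* ... stat counters ... */
};

struct blkio_cgroup {
	spinlock_t lock;
	/* ... */
};

static void nested_stats_locking_sketch(struct blkio_cgroup *blkcg,
					struct blkio_group *blkg)
{
	spin_lock_irq(&blkcg->lock);		/* local interrupts now off */

	/*
	 * Plain spin_lock(): interrupts are already disabled. Using the
	 * _irq variants here would turn interrupts back on at the inner
	 * unlock while blkcg->lock is still held.
	 */
	spin_lock(&blkg->stats_lock);
	/* ... read or accumulate the per-blkg stats ... */
	spin_unlock(&blkg->stats_lock);

	spin_unlock_irq(&blkcg->lock);		/* interrupts re-enabled here */
}
```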
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/cfq-iosched.c')
-rw-r--r-- | block/cfq-iosched.c | 24
1 file changed, 0 insertions, 24 deletions
```diff
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 393eaa59913b..9e386d9bcb79 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -3449,7 +3449,6 @@ static void cfq_exit_queue(struct elevator_queue *e)
 {
 	struct cfq_data *cfqd = e->elevator_data;
 	struct request_queue *q = cfqd->queue;
-	bool wait = false;
 
 	cfq_shutdown_timer_wq(cfqd);
 
@@ -3462,31 +3461,8 @@ static void cfq_exit_queue(struct elevator_queue *e)
 
 	spin_unlock_irq(q->queue_lock);
 
-#ifdef CONFIG_BLK_CGROUP
-	/*
-	 * If there are groups which we could not unlink from blkcg list,
-	 * wait for a rcu period for them to be freed.
-	 */
-	spin_lock_irq(q->queue_lock);
-	wait = q->nr_blkgs;
-	spin_unlock_irq(q->queue_lock);
-#endif
 	cfq_shutdown_timer_wq(cfqd);
 
-	/*
-	 * Wait for cfqg->blkg->key accessors to exit their grace periods.
-	 * Do this wait only if there are other unlinked groups out
-	 * there. This can happen if cgroup deletion path claimed the
-	 * responsibility of cleaning up a group before queue cleanup code
-	 * get to the group.
-	 *
-	 * Do not call synchronize_rcu() unconditionally as there are drivers
-	 * which create/delete request queue hundreds of times during scan/boot
-	 * and synchronize_rcu() can take significant time and slow down boot.
-	 */
-	if (wait)
-		synchronize_rcu();
-
 #ifndef CONFIG_CFQ_GROUP_IOSCHED
 	kfree(cfqd->root_group);
 #endif
```