author		Tejun Heo <tj@kernel.org>	2011-10-19 14:31:25 +0200
committer	Jens Axboe <axboe@kernel.dk>	2011-10-19 14:31:25 +0200
commit		315fceee81155ef2aeed9316ca72aeea9347db5c (patch)
tree		91ac02284b6737e6b65e855da771f52dbb3ad32d /block/blk-throttle.c
parent		75eb6c372d41d6d140b893873f6687d78c987a44 (diff)
download	linux-stable-315fceee81155ef2aeed9316ca72aeea9347db5c.tar.gz
		linux-stable-315fceee81155ef2aeed9316ca72aeea9347db5c.tar.bz2
		linux-stable-315fceee81155ef2aeed9316ca72aeea9347db5c.zip
block: drop unnecessary blk_get/put_queue() in scsi_cmd_ioctl() and blk_get_tg()
blk_get/put_queue() in scsi_cmd_ioctl() and throtl_get_tg() are
completely bogus. The caller must already hold a reference to the
queue on entry, so taking an extra reference doesn't change anything.
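For illustration, the redundant pattern amounts to the following
minimal sketch; example_op() is hypothetical, not the actual call
chain, and at the time of this commit blk_get_queue() returned 0 on
success and nonzero if the queue was already dead:

    /*
     * Hypothetical caller for illustration only. Whoever calls
     * example_op() already holds a reference on q, so q cannot go
     * away here whether or not we take another reference of our own.
     */
    static int example_op(struct request_queue *q)
    {
            if (blk_get_queue(q))   /* refcount n -> n + 1, n >= 1 */
                    return -ENXIO;  /* fails only if q is already dead */
            /* ... work that needs q pinned; the caller's reference
             * alone already guarantees that ... */
            blk_put_queue(q);       /* refcount n + 1 -> n */
            return 0;
    }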
For scsi_cmd_ioctl(), the only effect is that it ends up checking
QUEUE_FLAG_DEAD on entry; however, this is bogus because the queue can
die right after blk_get_queue(). A dead queue should be, and is,
handled in the request issue path (that path is somewhat broken at the
moment, but that's a separate problem and doesn't affect this one
much).
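The race that makes the entry-time check meaningless, sketched as a
timeline (illustrative only; blk_cleanup_queue() is what marks a queue
dead during device teardown):

    /*
     * CPU0: ioctl path                    CPU1: device teardown
     * ----------------                    ---------------------
     * blk_get_queue(q);  // q not dead
     *                                     blk_cleanup_queue(q);
     *                                     // QUEUE_FLAG_DEAD now set
     * // requests are issued on a now-dead queue anyway
     *
     * The check at entry merely narrows the window; only the request
     * issue path can reliably reject a dead queue.
     */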
throtl_get_tg() incorrectly assumes that q is RCU-freed. Also, it
doesn't check the return value of blk_get_queue(); if the queue is
already dead, it ends up doing an extra put.
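Condensed from the pre-patch code in the diff below, with comments
added to mark the two problems (the elision is mine):

    blk_get_queue(q);               /* return value ignored: takes no
                                     * reference if q is already dead */
    rcu_read_unlock();              /* rcu_read_lock() never protected q;
                                     * request queues are not RCU-freed */
    spin_unlock_irq(q->queue_lock);

    tg = throtl_alloc_tg(td);       /* may sleep */

    if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
            blk_put_queue(q);       /* if blk_get_queue() failed above,
                                     * this drops a reference that was
                                     * never taken */
            ...
    }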
Drop them.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/blk-throttle.c')
-rw-r--r--	block/blk-throttle.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index f3f495ea4eeb..ecba5fcef201 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -324,12 +324,8 @@ static struct throtl_grp * throtl_get_tg(struct throtl_data *td)
 	/*
 	 * Need to allocate a group. Allocation of group also needs allocation
 	 * of per cpu stats which in-turn takes a mutex() and can block. Hence
-	 * we need to drop rcu lock and queue_lock before we call alloc
-	 *
-	 * Take the request queue reference to make sure queue does not
-	 * go away once we return from allocation.
+	 * we need to drop rcu lock and queue_lock before we call alloc.
 	 */
-	blk_get_queue(q);
 	rcu_read_unlock();
 	spin_unlock_irq(q->queue_lock);
 
@@ -339,13 +335,11 @@ static struct throtl_grp * throtl_get_tg(struct throtl_data *td)
 	 * dead
 	 */
 	if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
-		blk_put_queue(q);
 		if (tg)
 			kfree(tg);
 		return ERR_PTR(-ENODEV);
 	}
 
-	blk_put_queue(q);
 
 	/* Group allocated and queue is still alive. take the lock */
 	spin_lock_irq(q->queue_lock);