| author | Tejun Heo <tj@kernel.org> | 2011-12-14 00:33:42 +0100 |
|---|---|---|
| committer | Jens Axboe <axboe@kernel.dk> | 2011-12-14 00:33:42 +0100 |
| commit | 7e5a8794492e43e9eebb68a98a23be055888ccd0 (patch) | |
| tree | cc049a23b2c994f910d3101860bc1c2ecb7aa35f /block/elevator.c | |
| parent | 3d3c2379feb177a5fd55bb0ed76776dc9d4f3243 (diff) | |
block, cfq: move io_cq exit/release to blk-ioc.c
With kmem_cache managed by blk-ioc, io_cq exit/release can be moved to
blk-ioc too. The odd ->io_cq->exit/release() callbacks are replaced
with elevator_ops->elevator_exit_icq_fn(); unlinking from both the ioc
and the q, and the freeing, are now handled automatically by blk-ioc.
The elevator operation only needs to perform the exit work specific to
the elevator - in cfq's case, exiting the cfqqs.
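With the unlinking and freeing done by blk-ioc, the elevator-side hook shrinks to elevator-specific teardown only. A minimal sketch of what cfq's elevator_exit_icq_fn looks like under this scheme (helper names such as icq_to_cic(), cic_to_cfqd() and cfq_exit_cfqq() are assumed from cfq-iosched.c):

```c
/* Sketch: cfq's elevator_exit_icq_fn - only cfq-specific teardown is
 * left here; unlinking from the ioc/q and the RCU free are handled by
 * blk-ioc. */
static void cfq_exit_icq(struct io_cq *icq)
{
	struct cfq_io_cq *cic = icq_to_cic(icq);
	struct cfq_data *cfqd = cic_to_cfqd(cic);

	if (cic->cfqq[BLK_RW_ASYNC]) {
		cfq_exit_cfqq(cfqd, cic->cfqq[BLK_RW_ASYNC]);
		cic->cfqq[BLK_RW_ASYNC] = NULL;
	}

	if (cic->cfqq[BLK_RW_SYNC]) {
		cfq_exit_cfqq(cfqd, cic->cfqq[BLK_RW_SYNC]);
		cic->cfqq[BLK_RW_SYNC] = NULL;
	}
}
```

The hook is then wired up through cfq's elevator_ops (.elevator_exit_icq_fn = cfq_exit_icq).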
Also, clearing of io_cq's on q detach is moved to block core and
automatically performed on elevator switch and q release.
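A rough sketch of the clearing helper added to blk-ioc.c, assuming the per-queue icq_list and an ioc_exit_icq() helper (the exact body and locking in blk-ioc.c may differ):

```c
/* Sketch: drop every io_cq attached to @q.  Called with q->queue_lock
 * held, e.g. on elevator switch (see the hunk below) and on queue
 * release. */
void ioc_clear_queue(struct request_queue *q)
{
	lockdep_assert_held(q->queue_lock);

	while (!list_empty(&q->icq_list)) {
		struct io_cq *icq = list_entry(q->icq_list.next,
					       struct io_cq, q_node);
		struct io_context *ioc = icq->ioc;

		/* exit and schedule RCU free of the icq under ioc->lock */
		spin_lock(&ioc->lock);
		ioc_exit_icq(icq);
		spin_unlock(&ioc->lock);
	}
}
```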
Because the q an io_cq points to might be freed before the RCU callback
for the io_cq runs, blk-ioc must remember which cache the io_cq needs
to be freed back to when the io_cq is released. A new field,
io_cq->__rcu_icq_cache, is added for this purpose. As both the new
field and rcu_head are used only after the io_cq is released, while the
q/ioc_node fields aren't used past that point, the two pairs are put
into unions.
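Roughly, the resulting io_cq layout and RCU free path look like the sketch below (field order and surrounding members are illustrative; the authoritative definitions live in include/linux/iocontext.h and block/blk-ioc.c):

```c
struct io_cq {
	struct request_queue	*q;
	struct io_context	*ioc;

	/*
	 * q_node/ioc_node link the icq into q->icq_list and
	 * ioc->icq_list while it is alive.  Once the icq is released
	 * those links are no longer used, so the same storage carries
	 * the owning kmem_cache and the rcu_head for the deferred free.
	 */
	union {
		struct list_head	q_node;
		struct kmem_cache	*__rcu_icq_cache;
	};
	union {
		struct hlist_node	ioc_node;
		struct rcu_head		__rcu_head;
	};
};

/* RCU callback: icq->q may already be gone by now, so free into the
 * cache recorded at release time instead of going through q->elevator. */
static void icq_free_icq_rcu(struct rcu_head *head)
{
	struct io_cq *icq = container_of(head, struct io_cq, __rcu_head);

	kmem_cache_free(icq->__rcu_icq_cache, icq);
}
```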
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block/elevator.c')
-rw-r--r-- | block/elevator.c | 3 |
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/elevator.c b/block/elevator.c
index cca049fb45c8..91e18f8af9be 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -979,8 +979,9 @@ static int elevator_switch(struct request_queue *q, struct elevator_type *new_e)
 		goto fail_register;
 	}
 
-	/* done, replace the old one with new one and turn off BYPASS */
+	/* done, clear io_cq's, switch elevators and turn off BYPASS */
 	spin_lock_irq(q->queue_lock);
+	ioc_clear_queue(q);
 	old_elevator = q->elevator;
 	q->elevator = e;
 	spin_unlock_irq(q->queue_lock);