author | Jianchao Wang <jianchao.w.wang@oracle.com> | 2018-08-21 15:15:04 +0800
committer | Jens Axboe <axboe@kernel.dk> | 2018-08-21 09:02:56 -0600
commit | f5bbbbe4d63577026f908a809f22f5fd5a90ea1f (patch)
tree | a6d8a1e7329b4160285b79a0565a45b53a2809f6 /block
parent | d48ece209f82c9ce07be942441b53d3fa3664936 (diff)
download | linux-f5bbbbe4d63577026f908a809f22f5fd5a90ea1f.tar.gz linux-f5bbbbe4d63577026f908a809f22f5fd5a90ea1f.tar.bz2 linux-f5bbbbe4d63577026f908a809f22f5fd5a90ea1f.zip
blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter
For blk-mq, part_in_flight/rw invokes blk_mq_in_flight/rw to account for
in-flight requests, and that path accesses queue_hw_ctx and nr_hw_queues
without any protection. When an update of nr_hw_queues runs concurrently
with blk_mq_in_flight/rw, the kernel can panic.

Before nr_hw_queues is updated, the queue is frozen, so q_usage_counter
can be used to avoid the race. percpu_ref_is_zero is used here so that
no in-flight request is missed. The accesses to nr_hw_queues and
queue_hw_ctx in blk_mq_queue_tag_busy_iter are placed under an RCU
read-side critical section, and __blk_mq_update_nr_hw_queues uses
synchronize_rcu to ensure the zeroed q_usage_counter is globally
visible.
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block')
-rw-r--r-- | block/blk-mq-tag.c | 14
-rw-r--r-- | block/blk-mq.c | 4
2 files changed, 17 insertions, 1 deletion
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index c0c4e63583ae..8c5cc115b3f8 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -320,6 +320,18 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 	struct blk_mq_hw_ctx *hctx;
 	int i;
 
+	/*
+	 * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and
+	 * queue_hw_ctx after freeze the queue. So we could use q_usage_counter
+	 * to avoid race with it. __blk_mq_update_nr_hw_queues will users
+	 * synchronize_rcu to ensure all of the users go out of the critical
+	 * section below and see zeroed q_usage_counter.
+	 */
+	rcu_read_lock();
+	if (percpu_ref_is_zero(&q->q_usage_counter)) {
+		rcu_read_unlock();
+		return;
+	}
 
 	queue_for_each_hw_ctx(q, hctx, i) {
 		struct blk_mq_tags *tags = hctx->tags;
@@ -335,7 +347,7 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
 		bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
 		bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
 	}
-
+	rcu_read_unlock();
 }
 
 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9c8c8c71a13f..81cb84b17b73 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2977,6 +2977,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 	list_for_each_entry(q, &set->tag_list, tag_set_list)
 		blk_mq_freeze_queue(q);
 	/*
+	 * Sync with blk_mq_queue_tag_busy_iter.
+	 */
+	synchronize_rcu();
+	/*
 	 * Switch IO scheduler to 'none', cleaning up the data associated
 	 * with the previous scheduler. We will switch back once we are done
 	 * updating the new sw to hw queue mappings.
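To make the scheme described in the commit message easier to follow, here is a minimal sketch of the reader/updater pairing, assuming the block layer API of this kernel era. The names my_busy_iter() and my_update_nr_hw_queues() are hypothetical stand-ins for blk_mq_queue_tag_busy_iter() and __blk_mq_update_nr_hw_queues(); the bodies are heavily simplified and only illustrate the q_usage_counter/RCU handshake, not the real accounting or remapping work.

```c
#include <linux/blkdev.h>
#include <linux/blk-mq.h>
#include <linux/percpu-refcount.h>
#include <linux/rcupdate.h>

/* Reader side: may run at any time, e.g. via part_in_flight(). */
static void my_busy_iter(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	rcu_read_lock();
	/*
	 * A zeroed q_usage_counter means the queue is frozen and
	 * nr_hw_queues/queue_hw_ctx may be about to change; bail out
	 * rather than dereference a possibly stale hctx array.
	 */
	if (percpu_ref_is_zero(&q->q_usage_counter)) {
		rcu_read_unlock();
		return;
	}

	queue_for_each_hw_ctx(q, hctx, i) {
		/* ... walk hctx->tags and account in-flight requests ... */
	}
	rcu_read_unlock();
}

/* Updater side: resizes the hardware queue mapping. */
static void my_update_nr_hw_queues(struct request_queue *q)
{
	blk_mq_freeze_queue(q);	/* drains requests, drives q_usage_counter to zero */

	/*
	 * Wait for readers that sampled a non-zero q_usage_counter before
	 * the freeze completed to leave their RCU read-side section, and
	 * make the zeroed counter visible to readers that start later.
	 */
	synchronize_rcu();

	/* ... now safe to reallocate queue_hw_ctx and update nr_hw_queues ... */

	blk_mq_unfreeze_queue(q);
}
```

The point of this arrangement is that the reader only pays an rcu_read_lock()/percpu_ref_is_zero() check on its fast path, while all of the waiting is pushed onto the rare nr_hw_queues update.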