author | Ming Lei <ming.lei@redhat.com> | 2018-04-11 18:47:44 +0800 |
---|---|---|
committer | Jens Axboe <axboe@kernel.dk> | 2018-04-11 07:59:15 -0600 |
commit | 2434af79c85d45d41d0c286fedf6e0556888a54c (patch) | |
tree | ce996ca4b97a75ad5cc0d7e3cd37ac0a3101eaed /block | |
parent | 37f9579f4c31a6d698dbf3016d7bf132f9288d30 (diff) | |
blk-mq: Revert "blk-mq: reimplement blk_mq_hw_queue_mapped"
This reverts commit 127276c6ce5a30fcc806b7fe53015f4f89b62956.
When all CPUs of one hw queue become offline, there may still be IOs
from this hctx that have not completed. But blk_mq_hw_queue_mapped() is
called in blk_mq_queue_tag_busy_iter(), which is used for iterating
requests in the timeout handler, so timeout events will be missed on the
inactive hctx and its requests may never be completed.
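For illustration only, here is a small userspace model of the two checks;
struct, field, and function names below are stand-ins invented for this
sketch, not the kernel's real blk_mq_hw_ctx, cpu_online_mask, or
blk_mq_queue_tag_busy_iter():

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a hw queue context; fields loosely mirror blk_mq_hw_ctx. */
struct hctx_model {
	unsigned long cpumask;  /* bitmask of CPUs mapped to this hw queue */
	unsigned int nr_ctx;    /* number of mapped software queue contexts */
	void *tags;             /* non-NULL once the tag set is allocated */
};

/* Reverted reimplementation: depends on which CPUs are currently online. */
static bool mapped_reimpl(const struct hctx_model *h, unsigned long online_mask)
{
	return (h->cpumask & online_mask) != 0;
}

/* Restored check: true whenever the hctx has contexts and tags, regardless
 * of CPU hotplug state, so the timeout iteration still visits it. */
static bool mapped_restored(const struct hctx_model *h)
{
	return h->nr_ctx && h->tags;
}

int main(void)
{
	int dummy_tags;
	struct hctx_model h = { .cpumask = 0x3, .nr_ctx = 2, .tags = &dummy_tags };
	unsigned long online = 0x0;  /* both CPUs of this hctx went offline */

	/* With all mapped CPUs offline, the reimplemented check reports the
	 * hctx as unmapped, so its in-flight requests would be skipped by
	 * the timeout iterator; the restored check keeps covering them. */
	printf("reimplemented: %d, restored: %d\n",
	       mapped_reimpl(&h, online), mapped_restored(&h));
	return 0;
}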
Also, the reimplementation of blk_mq_hw_queue_mapped() no longer matches
the helper's name; it should have been named blk_mq_hw_queue_active().
The other callers also need further verification against this
reimplementation.
So revert this patch now; we can improve hw queue activate/inactivate
handling after adequate research and testing.
Cc: Stefan Haberland <sth@linux.vnet.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Christoph Hellwig <hch@lst.de>
Reported-by: Jens Axboe <axboe@kernel.dk>
Fixes: 127276c6ce5a30fcc ("blk-mq: reimplement blk_mq_hw_queue_mapped")
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block')
-rw-r--r-- | block/blk-mq.h | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 502af371b83b..88c558f71819 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -181,7 +181,7 @@ static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
 
 static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
 {
-	return cpumask_first_and(hctx->cpumask, cpu_online_mask) < nr_cpu_ids;
+	return hctx->nr_ctx && hctx->tags;
 }
 
 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,