path: root/block
author     Ming Lei <ming.lei@redhat.com>                   2020-08-18 17:07:28 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2020-09-03 11:26:54 +0200
commit     872a2b3182ee94e772ae81f2a10f3118f7b36ffe (patch)
tree       4e5750ad9cdbd24e7ebd180e07eb295ec5fecb00 /block
parent     9054d58440927f08efa71c3da211acbdd18e634a (diff)
download   linux-stable-872a2b3182ee94e772ae81f2a10f3118f7b36ffe.tar.gz
           linux-stable-872a2b3182ee94e772ae81f2a10f3118f7b36ffe.tar.bz2
           linux-stable-872a2b3182ee94e772ae81f2a10f3118f7b36ffe.zip
blk-mq: insert request not through ->queue_rq into sw/scheduler queue
[ Upstream commit db03f88fae8a2c8007caafa70287798817df2875 ]

Commit c616cbee97ae ("blk-mq: punt failed direct issue to dispatch list") was
supposed to add requests that have already been through ->queue_rq() to the hw
queue dispatch list, but it also adds requests that merely ran out of budget or
of a driver tag. That basically bypasses request merging, causes too many
requests to be dispatched to the LLD, and needlessly increases %system.

Fix this by inserting requests that have not been through ->queue_rq() into the
sw/scheduler queue instead; this is safe because ->queue_rq() has not been
called on such a request yet.

High %system can be observed on Azure storvsc devices, and even soft lockups
have been seen. This patch reduces %system during heavy sequential IO and
lowers the soft-lockup risk.

Fixes: c616cbee97ae ("blk-mq: punt failed direct issue to dispatch list")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
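For reference, the two insert helpers involved take very different paths. The
prototypes below are a sketch based on the upstream block layer of this era
(block/blk-mq.h and block/blk-mq-sched.h); exact signatures may differ slightly
between stable branches, so treat them as illustrative only:

/*
 * Puts the request straight onto hctx->dispatch, skipping the I/O scheduler
 * and the per-cpu software queues entirely, so no merging can happen. This
 * is intended for requests that already failed inside ->queue_rq().
 */
void blk_mq_request_bypass_insert(struct request *rq, bool at_head,
				  bool run_queue);

/*
 * Inserts the request through the elevator (or the per-cpu sw queue when no
 * scheduler is attached), where it can still be merged with later bios. Only
 * safe while ->queue_rq() has not yet been called on the request.
 */
void blk_mq_sched_insert_request(struct request *rq, bool at_head,
				 bool run_queue, bool async);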
Diffstat (limited to 'block')
-rw-r--r--  block/blk-mq.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ae7d31cb5a4e..8f67f0f16ec2 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1869,7 +1869,8 @@ insert:
 	if (bypass_insert)
 		return BLK_STS_RESOURCE;
 
-	blk_mq_request_bypass_insert(rq, false, run_queue);
+	blk_mq_sched_insert_request(rq, false, run_queue, false);
+
 	return BLK_STS_OK;
 }
 
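For context, the tail of __blk_mq_try_issue_directly() after this change looks
roughly like the sketch below, reconstructed from the upstream v5.9-era code
(the stable branch this patch lands in may differ in small details such as the
budget helper arguments):

	/* ... inside __blk_mq_try_issue_directly() ... */

	if (q->elevator && !bypass_insert)
		goto insert;

	/* Out of dispatch budget: ->queue_rq() was never called. */
	if (!blk_mq_get_dispatch_budget(hctx))
		goto insert;

	/* No driver tag available: ->queue_rq() was never called either. */
	if (!blk_mq_get_driver_tag(rq)) {
		blk_mq_put_dispatch_budget(hctx);
		goto insert;
	}

	return __blk_mq_issue_directly(hctx, rq, cookie, last);
insert:
	/* A request that already failed in ->queue_rq() is punted to
	 * hctx->dispatch by the caller instead of being re-inserted here.
	 */
	if (bypass_insert)
		return BLK_STS_RESOURCE;

	/* Not issued yet, so feed it back through the scheduler/sw queue
	 * where it can still be merged with later requests.
	 */
	blk_mq_sched_insert_request(rq, false, run_queue, false);

	return BLK_STS_OK;
}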