author    Ming Lei <ming.lei@redhat.com>      2018-07-10 09:03:31 +0800
committer Jens Axboe <axboe@kernel.dk>        2018-07-17 16:04:00 -0600
commit    6ce3dd6eec114930cf2035a8bcb1e80477ed79a8 (patch)
tree      6eed5c8628772b9d52db0324593e71eae13aa1b8 /block/blk-mq.h
parent    71e9690b59e7349156025a514c29c29ef55b0175 (diff)
blk-mq: issue directly if hw queue isn't busy in case of 'none'
In the case of the 'none' I/O scheduler, when the hw queue isn't busy, it isn't necessary to enqueue a request to the sw queue and dequeue it again: the request can be submitted to the hw queue right away without extra cost. Meanwhile there shouldn't be many requests in the sw queue, so we don't need to worry about the effect on I/O merging.

There are still single-hw-queue SCSI HBAs (HPSA, megaraid_sas, ...) which may connect high performance devices, so 'none' is often required for obtaining good performance.

This patch improves IOPS and decreases CPU utilization on megaraid_sas, per Kashyap's test.

Cc: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Laurence Oberman <loberman@redhat.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Hannes Reinecke <hare@suse.de>
Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
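Note: the hunk below only adds the declaration to block/blk-mq.h; the body of the new helper lives in block/blk-mq.c, which is outside this diffstat. As a rough sketch of the behavior the commit message describes (a hedged reconstruction for illustration, not the committed code), the helper pops each request off the list and issues it straight to the hw queue, stopping on a resource shortage so the caller can fall back to the sw queue:

    /* Illustrative sketch only: the real definition is in block/blk-mq.c
     * and is not shown in this block/blk-mq.h diff.
     */
    void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
    		struct list_head *list)
    {
    	while (!list_empty(list)) {
    		blk_status_t ret;
    		struct request *rq = list_first_entry(list, struct request,
    				queuelist);

    		list_del_init(&rq->queuelist);
    		/* Bypass the sw queue and hand the request straight to
    		 * the driver.
    		 */
    		ret = blk_mq_request_issue_directly(rq);
    		if (ret != BLK_STS_OK) {
    			if (ret == BLK_STS_RESOURCE ||
    			    ret == BLK_STS_DEV_RESOURCE) {
    				/* Out of resources: put the request back
    				 * and let the caller insert the remainder
    				 * via the sw queue as usual.
    				 */
    				list_add(&rq->queuelist, list);
    				break;
    			}
    			/* Fail the request on any other error. */
    			blk_mq_end_request(rq, ret);
    		}
    	}
    }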
Diffstat (limited to 'block/blk-mq.h')
-rw-r--r--  block/blk-mq.h | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index bc2b24735ed4..9497b47e2526 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -64,6 +64,8 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 
 /* Used by blk_insert_cloned_request() to issue request directly */
 blk_status_t blk_mq_request_issue_directly(struct request *rq);
+void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+		struct list_head *list);
 
 /*
  * CPU -> queue mappings
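For context (not part of this header hunk), the caller side of this change is blk_mq_sched_insert_requests() in block/blk-mq-sched.c. A hedged sketch of the new fast path, assuming a hw-queue busyness indicator such as hctx->dispatch_busy:

    /* Sketch of the caller-side fast path (reconstruction, not shown in
     * this diff): with no elevator attached ('none') and an idle hw
     * queue, try direct issue first and only insert the leftovers into
     * the sw queue.
     */
    if (!hctx->dispatch_busy && !e && !run_queue_async) {
    	blk_mq_try_issue_list_directly(hctx, list);
    	if (list_empty(list))
    		return;
    }
    blk_mq_insert_requests(hctx, ctx, list);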