author	Nitesh Shetty <nj.shetty@samsung.com>	2018-02-13 21:18:12 +0530
committer	Jens Axboe <axboe@kernel.dk>	2018-02-13 09:12:04 -0700
commit	67b4110f8c8d16e588d7730db8e8b01b32c1bd8b (patch)
tree	b8564dcbfef53bae1b43777ff84e90abb9d61b72 /block
parent	178e834c47b0d01352c48730235aae69898fbc02 (diff)
blk: optimization for classic polling
This removes the dependency on interrupts to wake up the polling task. While polling for IO completion, set the task state to TASK_RUNNING if need_resched() returns true. Previously the polling task would go to sleep and rely on an interrupt to wake it up, which made some IO take very long when interrupt coalescing is enabled in NVMe.

Reference: http://lists.infradead.org/pipermail/linux-nvme/2018-February/015435.html

Changes from v2 to v3:
- use __set_current_state() instead of set_current_state()

Changes from v1 to v2:
- set the task state once in blk_poll() instead of in multiple callers

Signed-off-by: Nitesh Shetty <nj.shetty@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Diffstat (limited to 'block')
-rw-r--r--	block/blk-mq.c	| 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index df93102e2149..357492712b0e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3164,6 +3164,7 @@ static bool __blk_mq_poll(struct blk_mq_hw_ctx *hctx, struct request *rq)
 		cpu_relax();
 	}
+	__set_current_state(TASK_RUNNING);
 	return false;
 }