author		Jeff Moyer <jmoyer@redhat.com>		2011-10-17 12:57:23 +0200
committer	Jens Axboe <axboe@kernel.dk>		2011-10-24 16:24:31 +0200
commit		e67b77c791ca2778198c9e7088f3266ed2da7a55
tree		9c65ce6b5679d1f45fa1e4720430ea17b11fa2aa /block
parent		834f9f61a525d2f6d3d0c93894e26326c8d3ceed
blk-flush: move the queue kick into blk_insert_cloned_request
A dm-multipath user reported[1] a problem when trying to boot a kernel with commit 4853abaae7e4a2af938115ce9071ef8684fb7af4 (block: fix flush machinery for stacking drivers with differring flush flags) applied. It turns out that an empty flush request can be sent into blk_insert_flush. When the BUG_ON was fixed to allow for this, I/O on the underlying device would stall. The reason is that blk_insert_cloned_request does not kick the queue. In the aforementioned commit, I had added a special case to kick the queue if data was sent down but the queue flags did not require a flush. A better solution is to push the queue kick up into blk_insert_cloned_request.

This patch, along with a follow-on which fixes the BUG_ON, fixes the issue reported.

[1] http://www.redhat.com/archives/dm-devel/2011-September/msg00154.html

Reported-by: Christophe Saout <christophe@saout.de>
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Acked-by: Tejun Heo <tj@kernel.org>
Stable note: 3.1
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
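For readers unfamiliar with the block layer's dispatch model, the sketch below shows what "kicking the queue" means: inserting a request only links it onto the request_queue, and nothing is dispatched until the queue is run, either synchronously via __blk_run_queue() or from kblockd via blk_run_queue_async(). This is a simplified approximation of the 3.1-era helpers in block/blk-core.c, not part of this patch.

/*
 * Simplified sketch of the queue-kick helpers (3.1-era block core,
 * tracing and recursion handling omitted). Not part of this patch.
 */
void __blk_run_queue(struct request_queue *q)
{
	if (unlikely(blk_queue_stopped(q)))
		return;

	/* dispatch pending requests right away; caller holds queue_lock */
	q->request_fn(q);
}

void blk_run_queue_async(struct request_queue *q)
{
	/* defer the dispatch to kblockd instead of running it here */
	if (likely(!blk_queue_stopped(q)))
		queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
}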
Diffstat (limited to 'block')
-rw-r--r--	block/blk-core.c	2
-rw-r--r--	block/blk-flush.c	1
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index d34433ae7917..795154e54a75 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1725,6 +1725,8 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 		where = ELEVATOR_INSERT_FLUSH;
 
 	add_acct_request(q, rq, where);
+	if (where == ELEVATOR_INSERT_FLUSH)
+		__blk_run_queue(q);
 	spin_unlock_irqrestore(q->queue_lock, flags);
 
 	return 0;
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 89ae3b9bf7ca..720ad607ff91 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -330,7 +330,6 @@ void blk_insert_flush(struct request *rq)
 	if ((policy & REQ_FSEQ_DATA) &&
 	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
 		list_add_tail(&rq->queuelist, &q->queue_head);
-		blk_run_queue_async(q);
 		return;
 	}
 
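With both hunks applied, the data-only fast path in blk_insert_flush no longer runs the queue itself; the kick is issued by the caller of the flush insert. Below is a rough sketch of how the tail of blk_insert_cloned_request reads once this patch is in place, simplified from the 3.1-era source (the limit and integrity checks above this excerpt are omitted).

	spin_lock_irqsave(q->queue_lock, flags);

	/* flush/fua requests are routed through the flush machinery */
	if (rq->cmd_flags & (REQ_FLUSH | REQ_FUA))
		where = ELEVATOR_INSERT_FLUSH;

	add_acct_request(q, rq, where);
	/* the flush insert path no longer runs the queue, so kick it here */
	if (where == ELEVATOR_INSERT_FLUSH)
		__blk_run_queue(q);
	spin_unlock_irqrestore(q->queue_lock, flags);

	return 0;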