| author | Tejun Heo <tj@kernel.org> | 2009-05-08 11:54:15 +0900 |
|---|---|---|
| committer | Jens Axboe <jens.axboe@oracle.com> | 2009-05-11 09:52:17 +0200 |
| commit | 296b2f6ae654581adc27f0d6f0af454c7f3d06ee (patch) | |
| tree | 8fab2b91741336d41e559a839b547d7ac3090524 /arch | |
| parent | fb3ac7f6b811eac8e0aafa3df1c16ed872e898a8 (diff) | |
block: convert to dequeueing model (easy ones)
plat-omap/mailbox, floppy, viocd, mspro_block, i2o_block and
mmc/card/queue are already pretty close to the dequeueing model and can be
converted with simple changes. Convert them.
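
As a rough illustration of what the dequeueing model means here (a minimal
sketch against the 2009 block API; the driver, my_do_work() and
my_handle_request() are hypothetical, not code from this patch):

```c
/*
 * Old model: peek at the head request with elv_next_request() and leave
 * it on the queue while it is being processed.
 *
 * New (dequeueing) model: dequeue the request as soon as it is picked up
 * so the block layer tracks it as in-flight, and hand it back with
 * blk_requeue_request() if processing fails.
 */
#include <linux/blkdev.h>

static int my_handle_request(struct request *rq);	/* placeholder */

static void my_do_work(struct request_queue *q)
{
	struct request *rq;

	spin_lock_irq(q->queue_lock);
	rq = elv_next_request(q);
	if (rq)
		blkdev_dequeue_request(rq);	/* rq is now in-flight */
	spin_unlock_irq(q->queue_lock);

	if (!rq)
		return;

	if (my_handle_request(rq)) {
		/* Failure: put the request back on the queue. */
		spin_lock_irq(q->queue_lock);
		blk_requeue_request(q, rq);
		spin_unlock_irq(q->queue_lock);
		return;
	}

	/* Success: complete the whole request. */
	spin_lock_irq(q->queue_lock);
	__blk_end_request_all(rq, 0);
	spin_unlock_irq(q->queue_lock);
}
```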
While at it,
* xen-blkfront: !fs check moved downwards to share dequeue call with
normal path.
* mspro_block: __blk_end_request(..., blk_rq_cur_bytes()) converted to
__blk_end_request_cur()
* mmc/card/queue: loop of __blk_end_request() converted to
__blk_end_request_all() (both completion-helper changes are sketched after this list)
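
The two completion-helper conversions above are mechanical; a sketch of the
equivalences (a hypothetical wrapper, not the actual mspro_block or
mmc/card/queue code):

```c
#include <linux/blkdev.h>

/* Sketch only: rq and error come from the surrounding driver code. */
static void my_finish(struct request *rq, int error, bool whole)
{
	if (whole)
		/* was: a loop of __blk_end_request(rq, error, chunk) */
		__blk_end_request_all(rq, error);	/* complete the whole request */
	else
		/* was: __blk_end_request(rq, error, blk_rq_cur_bytes(rq)) */
		__blk_end_request_cur(rq, error);	/* complete only the current segment */
}
```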
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Alex Dubov <oakad@yahoo.com>
Cc: Markus Lidel <Markus.Lidel@shadowconnect.com>
Cc: Pierre Ossman <drzeus@drzeus.cx>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Diffstat (limited to 'arch')
-rw-r--r-- | arch/arm/plat-omap/mailbox.c | 9 |
1 file changed, 9 insertions, 0 deletions
diff --git a/arch/arm/plat-omap/mailbox.c b/arch/arm/plat-omap/mailbox.c
index 538ba7541d3f..7a1f5c25fd17 100644
--- a/arch/arm/plat-omap/mailbox.c
+++ b/arch/arm/plat-omap/mailbox.c
@@ -198,6 +198,8 @@ static void mbox_tx_work(struct work_struct *work)
 
 		spin_lock(q->queue_lock);
 		rq = elv_next_request(q);
+		if (rq)
+			blkdev_dequeue_request(rq);
 		spin_unlock(q->queue_lock);
 
 		if (!rq)
@@ -208,6 +210,9 @@ static void mbox_tx_work(struct work_struct *work)
 		ret = __mbox_msg_send(mbox, tx_data->msg, tx_data->arg);
 		if (ret) {
 			enable_mbox_irq(mbox, IRQ_TX);
+			spin_lock(q->queue_lock);
+			blk_requeue_request(q, rq);
+			spin_unlock(q->queue_lock);
 			return;
 		}
 
@@ -238,6 +243,8 @@ static void mbox_rx_work(struct work_struct *work)
 	while (1) {
 		spin_lock_irqsave(q->queue_lock, flags);
 		rq = elv_next_request(q);
+		if (rq)
+			blkdev_dequeue_request(rq);
 		spin_unlock_irqrestore(q->queue_lock, flags);
 		if (!rq)
 			break;
@@ -345,6 +352,8 @@ omap_mbox_read(struct device *dev, struct device_attribute *attr, char *buf)
 	while (1) {
 		spin_lock_irqsave(q->queue_lock, flags);
 		rq = elv_next_request(q);
+		if (rq)
+			blkdev_dequeue_request(rq);
 		spin_unlock_irqrestore(q->queue_lock, flags);
 		if (!rq)