author    Jens Axboe <jens.axboe@oracle.com>  2008-05-07 09:48:17 +0200
committer Jens Axboe <jens.axboe@oracle.com>  2008-05-07 09:48:17 +0200
commit    dbaf2c003e151ad9231778819b0977f95e20e06f (patch)
tree      2768a0cd046801d83faf04c408a7d53a2fdfabc5 /block/blk-core.c
parent    2cdf79cafbd11580f5b63cd4993b45c1c4952415 (diff)
block: optimize generic_unplug_device()
Original patch from Mikulas Patocka <mpatocka@redhat.com>:

Mike Anderson was running an OLTP benchmark on a machine with 48 physical disks mapped to one logical device via device mapper. He found contention on request_queue->lock in generic_unplug_device(). The slowdown occurs because when some code calls unplug on the device mapper device, device mapper in turn calls unplug on all of the physical disks. Each of those unplug calls takes the lock, finds that the queue is already unplugged, releases the lock, and exits.

With the patch below, performance of the benchmark increased by 18% (the whole OLTP application, not just block layer microbenchmarks). So I'm submitting this patch for upstream.

I think the patch is correct: when multiple threads call plug and unplug simultaneously, it is unspecified whether the queue ends up plugged or unplugged, so the patch cannot make this worse. The caller that plugged the queue should unplug it anyway, and if it doesn't, there is a 3 ms timeout.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
Diffstat (limited to 'block/blk-core.c')
-rw-r--r--  block/blk-core.c | 8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index b754a4a2f9bd..1b7dddf94f4f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -253,9 +253,11 @@ EXPORT_SYMBOL(__generic_unplug_device);
**/
void generic_unplug_device(struct request_queue *q)
{
- spin_lock_irq(q->queue_lock);
- __generic_unplug_device(q);
- spin_unlock_irq(q->queue_lock);
+ if (blk_queue_plugged(q)) {
+ spin_lock_irq(q->queue_lock);
+ __generic_unplug_device(q);
+ spin_unlock_irq(q->queue_lock);
+ }
}
EXPORT_SYMBOL(generic_unplug_device);