author		Christoph Hellwig <hch@lst.de>	2014-04-11 19:07:01 +0200
committer	Christoph Hellwig <hch@lst.de>	2014-07-25 07:43:45 -0400
commit		71e75c97f97a9645d25fbf3d8e4165a558f18747 (patch)
tree		fb85185386af55199c46499dc3ce366d227870e1 /include/scsi
parent		74665016086615bbaa3fa6f83af410a0a4e029ee (diff)
scsi: convert device_busy to atomic_t
Avoid taking the queue_lock to check the per-device queue limit. Instead we do an atomic_inc_return early on to grab our slot in the queue, and if necessary decrement it after finishing all checks.

Unlike the host and target busy counters this doesn't allow us to avoid the queue_lock in the request_fn due to the way the interface works, but it'll allow us to prepare for using the blk-mq code, which doesn't use the queue_lock at all, and it at least avoids a queue_lock round trip in scsi_device_unbusy, which is still important given how busy the queue_lock is.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Webb Scales <webbnh@hp.com>
Acked-by: Jens Axboe <axboe@kernel.dk>
Tested-by: Bart Van Assche <bvanassche@acm.org>
Tested-by: Robert Elliott <elliott@hp.com>
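The inc-then-back-out pattern described above can be sketched roughly as below. This is only an illustration assuming the post-patch struct scsi_device (atomic_t device_busy, int queue_depth); the helper name sdev_queue_ready_sketch and the bare depth check are made up here, and the real logic lives in drivers/scsi/scsi_lib.c, which is outside this diffstat.

#include <linux/atomic.h>
#include <scsi/scsi_device.h>

/* Hypothetical helper: not the actual scsi_lib.c code touched by this commit. */
static int sdev_queue_ready_sketch(struct scsi_device *sdev)
{
	unsigned int busy;

	/* Grab a slot up front instead of checking the limit under the queue_lock. */
	busy = atomic_inc_return(&sdev->device_busy) - 1;
	if (busy >= sdev->queue_depth)
		goto out_dec;	/* device already at its queue limit */

	return 1;		/* slot kept, command may be dispatched */

out_dec:
	/* A later check failed (here: the depth limit); give the slot back. */
	atomic_dec(&sdev->device_busy);
	return 0;
}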
Diffstat (limited to 'include/scsi')
-rw-r--r--	include/scsi/scsi_device.h	4
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 4e078b63a9e5..3329901c7243 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -81,9 +81,7 @@ struct scsi_device {
 	struct list_head    siblings;   /* list of all devices on this host */
 	struct list_head    same_target_siblings; /* just the devices sharing same target id */
 
-	/* this is now protected by the request_queue->queue_lock */
-	unsigned int device_busy;	/* commands actually active on
-					 * low-level. protected by queue_lock. */
+	atomic_t device_busy;		/* commands actually active on LLDD */
 	spinlock_t list_lock;
 	struct list_head cmd_list;	/* queue of in use SCSI Command structures */
 	struct list_head starved_entry;