author | Mike Christie <michael.christie@oracle.com> | 2023-03-20 21:06:22 -0500
committer | Michael S. Tsirkin <mst@redhat.com> | 2023-04-21 03:02:30 -0400
commit | ced9eb376ab716ec5b06bef8c3e05608b8eb6700 (patch)
tree | 195dc0c8936a0ff8e62f418dae25e7d1d35c4267 /drivers/vhost
parent | eb1b291466376ad4d2f0a42392a2cd9f0e4983e5 (diff)
vhost-scsi: Check for a cleared backend before queueing an event
We currently hold the vhost_scsi_mutex while clearing the endpoint and
while performing vhost_scsi_do_plug, so tpg->vhost_scsi can't be freed
from under us, and so that anything queued is handled by the flush call
in vhost_scsi_clear_endpoint.
This patch removes the need for the vhost_scsi_mutex in the latter
case. In the next patches we won't hold the vhost_scsi_mutex while
flushing, so this patch adds a check for the clearing of the virtqueue
backend from vhost_scsi_clear_endpoint. We then know that once
vhost_scsi_clear_endpoint has cleared the backend, no new events will
be queued, and the flush after the vhost_vq_set_backend(vq, NULL) call
will see everything that has been queued up to that point. The flush
will then handle all events without the need for the vhost_scsi_mutex;
the teardown-side ordering is sketched below.
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Message-Id: <20230321020624.13323-6-michael.christie@oracle.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
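
For context, the teardown-side ordering the message relies on looks roughly like the sketch below. This is a minimal illustrative sketch, not the actual vhost_scsi_clear_endpoint() code: the helper name and the focus on only the event virtqueue are assumptions, while vhost_vq_set_backend() and vhost_dev_flush() are real vhost helpers.

/*
 * Minimal sketch of the ordering described above; not the real
 * vhost_scsi_clear_endpoint(). The helper name is made up, and only
 * the event virtqueue is shown.
 */
static void clear_evt_backend_and_flush(struct vhost_scsi *vs)
{
	struct vhost_virtqueue *vq = &vs->vqs[VHOST_SCSI_VQ_EVT].vq;

	mutex_lock(&vq->mutex);
	/*
	 * After this store, any vhost_scsi_do_plug() that takes
	 * vq->mutex sees a NULL backend and bails before queueing.
	 */
	vhost_vq_set_backend(vq, NULL);
	mutex_unlock(&vq->mutex);

	/*
	 * Anything queued before the store above was queued under
	 * vq->mutex first, so this flush sees and handles it; nothing
	 * new can be queued after it.
	 */
	vhost_dev_flush(&vs->dev);
}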
Diffstat (limited to 'drivers/vhost')
-rw-r--r-- | drivers/vhost/scsi.c | 8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index c945136ecf18..ba8097fcea43 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -2010,9 +2010,17 @@ vhost_scsi_do_plug(struct vhost_scsi_tpg *tpg,
 	vq = &vs->vqs[VHOST_SCSI_VQ_EVT].vq;
 	mutex_lock(&vq->mutex);
+	/*
+	 * We can't queue events if the backend has been cleared, because
+	 * we could end up queueing an event after the flush.
+	 */
+	if (!vhost_vq_get_backend(vq))
+		goto unlock;
+
 	if (vhost_has_feature(vq, VIRTIO_SCSI_F_HOTPLUG))
 		vhost_scsi_send_evt(vs, tpg, lun,
 				    VIRTIO_SCSI_T_TRANSPORT_RESET, reason);
+unlock:
 	mutex_unlock(&vq->mutex);
 }
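
For reference, vhost_scsi_do_plug() is reached from the LIO port callouts via the hotplug helpers sketched below (based on the upstream functions in drivers/vhost/scsi.c, trimmed here to the relevant calls). With the backend check in place, the later patches in this series can stop taking the vhost_scsi_mutex on these paths.

/* Call path into vhost_scsi_do_plug(), trimmed to the relevant calls. */
static void vhost_scsi_hotplug(struct vhost_scsi_tpg *tpg, struct se_lun *lun)
{
	vhost_scsi_do_plug(tpg, lun, true);
}

static void vhost_scsi_hotunplug(struct vhost_scsi_tpg *tpg, struct se_lun *lun)
{
	vhost_scsi_do_plug(tpg, lun, false);
}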