author     Nilay Shroff <nilay@linux.ibm.com>       2024-10-16 08:33:14 +0530
committer  Keith Busch <kbusch@kernel.org>          2024-10-17 11:07:37 -0700
commit     c199fac88fe7c749f88a0653e9f621b9f5a71cf1
tree       6d76834fa383a2194594e0ce63a84291070b7051 /drivers/nvme
parent     26bc0a81f64ce00fc4342c38eeb2eddaad084dd2
nvme-loop: flush off pending I/O while shutting down loop controller
While shutting down the loop controller, we first quiesce the admin/IO
queues, delete the admin/IO tag set, and only then destroy the admin/IO
queues. However, it's quite possible that in the window between quiescing
and destroying the admin/IO queues, some admin/IO request sneaks in; if
that happens, we can hit a hung task, because the shutdown operation
cannot make forward progress until all pending I/O has been flushed.
This commit ensures that, before destroying the admin/IO queues, we
unquiesce them so that any outstanding requests added after the queues
were quiesced are flushed to completion.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
Signed-off-by: Keith Busch <kbusch@kernel.org>
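The race is easier to see in miniature. Below is a minimal userspace
sketch in plain C with pthreads, not kernel code; all toy_* names are
invented for illustration. A drain that waits for pending requests hangs
forever if the queue is still quiesced, and unquiescing first lets the
worker flush the stragglers, mirroring what the patch does with
nvme_unquiesce_admin_queue()/nvme_unquiesce_io_queues().

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_queue {
        pthread_mutex_t lock;
        pthread_cond_t cond;
        int pending;        /* submitted but not yet completed */
        bool quiesced;      /* worker must not service requests */
        bool stop;
};

/* Worker: completes pending requests, but only while unquiesced. */
static void *toy_worker(void *arg)
{
        struct toy_queue *q = arg;

        pthread_mutex_lock(&q->lock);
        while (!q->stop) {
                if (!q->quiesced && q->pending > 0) {
                        q->pending--;
                        pthread_cond_broadcast(&q->cond);
                        continue;
                }
                pthread_cond_wait(&q->cond, &q->lock);
        }
        pthread_mutex_unlock(&q->lock);
        return NULL;
}

static void toy_submit(struct toy_queue *q)
{
        pthread_mutex_lock(&q->lock);
        q->pending++;       /* can sneak in even while quiesced */
        pthread_cond_broadcast(&q->cond);
        pthread_mutex_unlock(&q->lock);
}

static void toy_quiesce(struct toy_queue *q, bool on)
{
        pthread_mutex_lock(&q->lock);
        q->quiesced = on;
        pthread_cond_broadcast(&q->cond);
        pthread_mutex_unlock(&q->lock);
}

/* Blocks until every pending request completes: this is the "hung
 * task" if the queue is still quiesced when it is called. */
static void toy_drain(struct toy_queue *q)
{
        pthread_mutex_lock(&q->lock);
        while (q->pending > 0)
                pthread_cond_wait(&q->cond, &q->lock);
        pthread_mutex_unlock(&q->lock);
}

int main(void)
{
        struct toy_queue q = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .cond = PTHREAD_COND_INITIALIZER,
        };
        pthread_t t;

        pthread_create(&t, NULL, toy_worker, &q);

        toy_quiesce(&q, true);  /* shutdown begins: queue quiesced    */
        toy_submit(&q);         /* request sneaks into the window     */

        toy_quiesce(&q, false); /* the fix: unquiesce before draining */
        toy_drain(&q);          /* without it, this never returns     */

        pthread_mutex_lock(&q.lock);
        q.stop = true;
        pthread_cond_broadcast(&q.cond);
        pthread_mutex_unlock(&q.lock);
        pthread_join(t, NULL);

        printf("drained cleanly\n");
        return 0;
}

In the real driver, requests stuck in a quiesced queue can never
complete, so the teardown steps that wait for the queue to drain would
block indefinitely; unquiescing first lets them run to completion,
exactly as the hunks below do for the admin and IO queues.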
Diffstat (limited to 'drivers/nvme')
-rw-r--r--  drivers/nvme/target/loop.c | 13
1 file changed, 13 insertions, 0 deletions
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index e32790d8fc26..a9d112d34d4f 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -265,6 +265,13 @@ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
 {
 	if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))
 		return;
+	/*
+	 * It's possible that some requests might have been added
+	 * after admin queue is stopped/quiesced. So now start the
+	 * queue to flush these requests to the completion.
+	 */
+	nvme_unquiesce_admin_queue(&ctrl->ctrl);
+
 	nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
 	nvme_remove_admin_tag_set(&ctrl->ctrl);
 }
@@ -297,6 +304,12 @@ static void nvme_loop_destroy_io_queues(struct nvme_loop_ctrl *ctrl)
 		nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);
 	}
 	ctrl->ctrl.queue_count = 1;
+	/*
+	 * It's possible that some requests might have been added
+	 * after io queue is stopped/quiesced. So now start the
+	 * queue to flush these requests to the completion.
+	 */
+	nvme_unquiesce_io_queues(&ctrl->ctrl);
 }
 
 static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)