author		Shaohua Li <shli@kernel.org>	2013-12-31 11:38:50 +0800
committer	Jens Axboe <axboe@kernel.dk>	2014-01-30 12:57:25 -0700
commit		f0276924fa35a3607920a58cf5d878212824b951 (patch)
tree		5759cef09f3ba6b2f206ace779fef298a8b9d7be /block/blk-flush.c
parent		d835502f3dacad1638d516ab156d66f0ba377cf5 (diff)
blk-mq: Don't reserve a tag for flush request
Reserving a tag (request) for flush to avoid deadlock is overkill. A
tag is a valuable resource. We can instead track the number of flush
requests and disallow having too many pending flush requests allocated.
With this patch, blk_mq_alloc_request_pinned() may briefly busy-wait
(but never loop forever) if too many pending flush requests are already
allocated when a new flush request is allocated. This should not be a
problem, since having that many pending flush requests is a very rare
case.
I verified that this fixes the deadlock caused by too many pending
flush requests.
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
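
As an illustration of the counting scheme described above, here is a minimal sketch. The mq_pending_flush counter, the MAX_PENDING_FLUSH limit, and the helper name are hypothetical; they are not the interfaces this patch adds.

#include <linux/atomic.h>
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/*
 * Sketch of "count pending flushes instead of reserving a tag".
 * The mq_pending_flush counter and MAX_PENDING_FLUSH limit are
 * hypothetical names used only for illustration.
 */
#define MAX_PENDING_FLUSH	4

static struct request *alloc_flush_rq(struct request_queue *q)
{
	struct request *rq;

	for (;;) {
		/* Only allow a bounded number of pending flush requests. */
		if (atomic_add_unless(&q->mq_pending_flush, 1,
				      MAX_PENDING_FLUSH)) {
			/* Allocate an ordinary (non-reserved) tag. */
			rq = blk_mq_alloc_request(q, WRITE_FLUSH | REQ_FLUSH_SEQ,
						  __GFP_WAIT | GFP_ATOMIC, false);
			if (rq)
				return rq;
			atomic_dec(&q->mq_pending_flush);
		}
		/*
		 * Too many pending flushes (or no tag yet): spin briefly
		 * and retry -- the busy wait the message refers to.
		 */
		cpu_relax();
	}
}

A real implementation would also have to decrement the count once a flush sequence completes.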
Diffstat (limited to 'block/blk-flush.c')
-rw-r--r--	block/blk-flush.c	8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 9288aaf35c21..9143e85226c7 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -284,9 +284,8 @@ static void mq_flush_work(struct work_struct *work)
 
 	q = container_of(work, struct request_queue, mq_flush_work);
 
-	/* We don't need set REQ_FLUSH_SEQ, it's for consistency */
 	rq = blk_mq_alloc_request(q, WRITE_FLUSH|REQ_FLUSH_SEQ,
-			__GFP_WAIT|GFP_ATOMIC, true);
+			__GFP_WAIT|GFP_ATOMIC, false);
 	rq->cmd_type = REQ_TYPE_FS;
 	rq->end_io = flush_end_io;
 
@@ -408,8 +407,11 @@ void blk_insert_flush(struct request *rq)
 	/*
 	 * @policy now records what operations need to be done.  Adjust
	 * REQ_FLUSH and FUA for the driver.
+	 * We keep REQ_FLUSH for mq to track flush requests. For !FUA,
+	 * we never dispatch the request directly.
	 */
-	rq->cmd_flags &= ~REQ_FLUSH;
+	if (rq->cmd_flags & REQ_FUA)
+		rq->cmd_flags &= ~REQ_FLUSH;
 	if (!(fflags & REQ_FUA))
 		rq->cmd_flags &= ~REQ_FUA;
 
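
For completeness, the counting scheme sketched above would also need a decrement on the completion side; again a hypothetical sketch, not this patch's actual bookkeeping.

/* Pairs with the hypothetical mq_pending_flush counter above. */
static void flush_seq_done(struct request_queue *q)
{
	/* One flush sequence finished; make room for another one. */
	atomic_dec(&q->mq_pending_flush);
}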