author | Mikulas Patocka <mpatocka@redhat.com> | 2013-09-18 19:14:22 -0400
---|---|---
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2013-10-05 07:00:40 -0700
commit | 434b9ee66a512e9b91b9e1687b8183cd48a353fb (patch) |
tree | 663d588c96246da6782a2827b6969b23dde85f56 |
parent | 0c61d8a1d51e6f701ee17af5fa33bdeefaa02b75 (diff) |
download | linux-stable-434b9ee66a512e9b91b9e1687b8183cd48a353fb.tar.gz linux-stable-434b9ee66a512e9b91b9e1687b8183cd48a353fb.tar.bz2 linux-stable-434b9ee66a512e9b91b9e1687b8183cd48a353fb.zip |
dm snapshot: workaround for a false positive lockdep warning
commit 5ea330a75bd86b2b2a01d7b85c516983238306fb upstream.
The kernel reports a lockdep warning if a snapshot is invalidated because
it runs out of space.
The lockdep warning was triggered by commit 0976dfc1d0cd80a4e9dfaf87bd87
("workqueue: Catch more locking problems with flush_work()") in v3.5.
The warning is a false positive. The real cause of the warning is that
the lockdep engine treats different instances of md->lock as a single
lock.
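
To see why lockdep conflates them: lockdep keys lock classes off the static
initialization site rather than the lock instance, so two locks set up by the
same init call are indistinguishable to it. A minimal illustrative sketch, not
code from this patch (dev_example and dev_init_example are invented names):

```c
#include <linux/mutex.h>

struct dev_example {
	struct mutex lock;
};

static void dev_init_example(struct dev_example *d)
{
	/*
	 * mutex_init() is a macro with one static lock_class_key per
	 * call site, so every d->lock initialized here lands in the
	 * same lockdep class.  Lockdep cannot tell instance A's lock
	 * from instance B's, so a dependency chain running through two
	 * instances can look like a self-deadlock: a false positive.
	 */
	mutex_init(&d->lock);
}
```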
This patch is a workaround - we use flush_workqueue instead of flush_work.
This code path is not performance sensitive (it is called only on
initialization or invalidation), thus it doesn't matter that we flush the
whole workqueue.
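
As a self-contained sketch of the resulting pattern (the struct and function
names below are illustrative stand-ins, not the actual dm-snap-persistent
symbols): an on-stack work item is queued on a dedicated workqueue and waited
for by flushing the whole queue.

```c
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct mdata_req_example {
	struct work_struct work;
	int result;
};

static void do_metadata_example(struct work_struct *work)
{
	struct mdata_req_example *req =
		container_of(work, struct mdata_req_example, work);

	/* The metadata I/O would happen here; record its outcome. */
	req->result = 0;
}

static int chunk_io_example(struct workqueue_struct *metadata_wq)
{
	struct mdata_req_example req;

	INIT_WORK_ONSTACK(&req.work, do_metadata_example);
	queue_work(metadata_wq, &req.work);
	/*
	 * Waits for everything on metadata_wq, including req.work.
	 * Unlike flush_work(&req.work), the caller does not acquire the
	 * per-work-item lockdep map here, sidestepping the false
	 * positive described above.
	 */
	flush_workqueue(metadata_wq);

	return req.result;
}
```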
The real fix for the problem would be to teach the lockdep engine to treat
different instances of md->lock as separate locks.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Acked-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r-- | drivers/md/dm-snap-persistent.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index e4ecadf0548a..2847a0bcf264 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -251,7 +251,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw,
 	 */
 	INIT_WORK_ONSTACK(&req.work, do_metadata);
 	queue_work(ps->metadata_wq, &req.work);
-	flush_work(&req.work);
+	flush_workqueue(ps->metadata_wq);
 
 	return req.result;
 }