author | Tejun Heo <tj@kernel.org> | 2015-08-18 14:54:52 -0700 |
---|---|---|
committer | Jens Axboe <axboe@fb.com> | 2015-08-18 15:49:15 -0700 |
commit | 1ed8d48c57bf7400eac7b8dc622ab0413715cafb (patch) | |
tree | cca6d4773d4f043cb5b9feb766441c7c26401d25 /fs/fs-writeback.c | |
parent | 11743ee0477ab9691d08aa121c583184769d2847 (diff) | |
writeback: bdi_for_each_wb() iteration is memcg ID based not blkcg
wb's (bdi_writeback's) are currently keyed by memcg ID; however, in an
earlier implementation, wb's were keyed by blkcg ID.
bdi_for_each_wb() walks bdi->cgwb_tree in ascending ID order and
allows an iteration to start from an arbitrary ID, which makes it
possible to interrupt an iteration and resume it later.
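As a minimal illustration of this interrupt-and-resume pattern, here is a self-contained userspace sketch. It is not the kernel code: `wb_entry`, `first_wb_from()`, and the IDs are invented stand-ins for bdi_writeback, the radix-tree lookup, and memcg IDs.

```c
#include <stdio.h>

/* Hypothetical stand-in for bdi_writeback: entries keyed by an ID and
 * kept in ascending ID order, like bdi->cgwb_tree. */
struct wb_entry {
	int id;
	const char *name;
};

static struct wb_entry wbs[] = {
	{ 2, "wb-a" }, { 5, "wb-b" }, { 9, "wb-c" },
};

/* Return the first entry whose ID is >= start_id, or NULL; this mirrors
 * how a radix-tree lookup resumes an iteration from an arbitrary ID. */
static struct wb_entry *first_wb_from(int start_id)
{
	for (size_t i = 0; i < sizeof(wbs) / sizeof(wbs[0]); i++)
		if (wbs[i].id >= start_id)
			return &wbs[i];
	return NULL;
}

int main(void)
{
	int next_id = 0;
	struct wb_entry *wb;

	/* Visit every entry in ID order; after each one, remember
	 * id + 1 so the walk can be restarted from the right place. */
	while ((wb = first_wb_from(next_id))) {
		printf("visiting %s (id %d)\n", wb->name, wb->id);
		next_id = wb->id + 1;
		/* An interrupted caller could bail out here and later
		 * resume by calling first_wb_from(next_id) again. */
	}
	return 0;
}
```

A caller can drop out of the loop at any point and resume later with the saved `next_id`; that is the mechanism bdi_split_work_to_wbs() depends on.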
Unfortunately, while changing wb's to be keyed by memcg ID instead of
blkcg ID, bdi_for_each_wb() was missed and still assumes that wb's
are keyed by blkcg ID. This doesn't affect iterations that are never
interrupted, but bdi_split_work_to_wbs() relies on resuming the
iteration after an allocation failure and thus may incorrectly skip
or repeat wb's.
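To make the failure mode concrete, here is a sketch with invented IDs. Each entry carries an ID in both namespaces, but the tree is ordered by memcg ID, so a resume point computed from a blkcg ID lands in the wrong place:

```c
#include <stdio.h>

/* Each wb carries IDs from two unrelated namespaces; the tree is
 * sorted by memcg_id.  All values here are invented. */
struct wb_ids {
	int memcg_id;
	int blkcg_id;
};

static struct wb_ids tree[] = {	/* ascending memcg_id */
	{ .memcg_id = 2, .blkcg_id = 9 },
	{ .memcg_id = 5, .blkcg_id = 1 },
	{ .memcg_id = 9, .blkcg_id = 4 },
};

#define N (sizeof(tree) / sizeof(tree[0]))

/* Visit the first entry, record the resume point either correctly
 * (memcg_id + 1) or with the pre-fix bug (blkcg_id + 1), and show
 * where a resumed walk of the memcg-keyed tree would continue. */
static void resume_after_first(int use_blkcg_id)
{
	struct wb_ids *first = &tree[0];
	int next = (use_blkcg_id ? first->blkcg_id : first->memcg_id) + 1;

	printf("%s resume from %d -> ",
	       use_blkcg_id ? "buggy:" : "fixed:", next);
	for (size_t i = 0; i < N; i++) {
		if (tree[i].memcg_id >= next) {
			printf("visits memcg %d\n", tree[i].memcg_id);
			return;
		}
	}
	printf("walk ends early, remaining wbs skipped\n");
}

int main(void)
{
	resume_after_first(1);	/* blkcg 9 + 1 = 10 skips memcg 5 and 9 */
	resume_after_first(0);	/* memcg 2 + 1 = 3 resumes at memcg 5 */
	return 0;
}
```

The converse also holds: a blkcg ID smaller than the memcg ID (e.g. the {5, 1} entry above) yields a resume point behind the iterator, making the walk repeat wb's it had already queued.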
Fix it by changing bdi_for_each_wb() to take memcg IDs instead of
blkcg IDs and updating bdi_split_work_to_wbs() accordingly.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <axboe@fb.com>
Diffstat (limited to 'fs/fs-writeback.c')
-rw-r--r-- | fs/fs-writeback.c | 6 |
1 file changed, 3 insertions, 3 deletions
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 518c6294bf6c..c9def2115aca 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -839,7 +839,7 @@ static void bdi_split_work_to_wbs(struct backing_dev_info *bdi,
 					  bool skip_if_busy)
 {
 	long nr_pages = base_work->nr_pages;
-	int next_blkcg_id = 0;
+	int next_memcg_id = 0;
 	struct bdi_writeback *wb;
 	struct wb_iter iter;
 
@@ -849,14 +849,14 @@ static void bdi_split_work_to_wbs(struct backing_dev_info *bdi,
 		return;
 restart:
 	rcu_read_lock();
-	bdi_for_each_wb(wb, bdi, &iter, next_blkcg_id) {
+	bdi_for_each_wb(wb, bdi, &iter, next_memcg_id) {
 		if (!wb_has_dirty_io(wb) ||
 		    (skip_if_busy && writeback_in_progress(wb)))
 			continue;
 
 		base_work->nr_pages = wb_split_bdi_pages(wb, nr_pages);
 		if (!wb_clone_and_queue_work(wb, base_work)) {
-			next_blkcg_id = wb->blkcg_css->id + 1;
+			next_memcg_id = wb->memcg_css->id + 1;
 			rcu_read_unlock();
 			wb_wait_for_single_work(bdi, base_work);
 			goto restart;