author     Jan Kara <jack@suse.cz>          2013-09-16 08:24:26 -0400
committer  Theodore Ts'o <tytso@mit.edu>    2013-09-16 08:24:26 -0400
commit     9c12a831d73dd938a22418d70b39aed4feb4bdf2 (patch)
tree       4de915210213ff49efd637e35c176cd4e09d426d /fs/ext4
parent     ad4eec613536dc7e5ea0c6e59849e6edca634d8b (diff)
ext4: fix performance regression in writeback of random writes
The Linux Kernel Performance project guys have reported that commit
4e7ea81db5 introduces a performance regression for the following fio
workload:
[global]
direct=0
ioengine=mmap
size=1500M
bs=4k
pre_read=1
numjobs=1
overwrite=1
loops=5
runtime=300
group_reporting
invalidate=0
directory=/mnt/
file_service_type=random:36
[job0]
startdelay=0
rw=randrw
filename=data0/f1:data0/f2
[job1]
startdelay=0
rw=randrw
filename=data0/f2:data0/f1
...
[job7]
startdelay=0
rw=randrw
filename=data0/f2:data0/f1
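For reference, a job file like the one above can be replayed with fio directly; the file name randrw.fio below is only an illustrative assumption:

    $ fio randrw.fio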
The culprit is that, after the commit, ext4_writepages() is more aggressive
in writing back pages. Thus we have fewer runs of consecutive dirty pages,
resulting in more seeking.
This increased aggressiveness is caused by a bug in the condition
terminating ext4_writepages(): we restart writing from the beginning of
the file even when we should have terminated ext4_writepages() because
wbc->nr_to_write <= 0.
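To make the broken termination concrete, here is a minimal C sketch of the
range-cyclic writeback pattern the condition guards. This is an illustration
under assumptions, not the kernel code: write_range() and writepages_sketch()
are hypothetical names standing in for the real mpage machinery.

    #include <stdbool.h>

    struct writeback_control {
            long nr_to_write;       /* page budget left for this pass */
    };

    /* Hypothetical helper: writes back dirty pages in [first, last],
     * decrementing wbc->nr_to_write for each page written. */
    extern int write_range(struct writeback_control *wbc,
                           unsigned long first, unsigned long last);

    int writepages_sketch(struct writeback_control *wbc,
                          unsigned long writeback_index,
                          unsigned long last_page)
    {
            bool cycled = (writeback_index == 0);
            unsigned long first = writeback_index;
            unsigned long last = last_page;
            int ret;

    retry:
            ret = write_range(wbc, first, last);
            /* Wrap around to the start of the file only while this pass
             * still has budget. Without the nr_to_write check the loop
             * restarts at page 0 even though the pass is already over,
             * writing back scattered pages instead of stopping. */
            if (!ret && !cycled && wbc->nr_to_write > 0) {
                    cycled = true;
                    last = writeback_index - 1;
                    first = 0;
                    goto retry;
            }
            return ret;
    }

The one-line fix in the diff below adds exactly that third clause to the
wrap-around test.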
After fixing the condition, the throughput of the fio workload is about 20%
better than before the writeback reorganization.
Reported-by: "Yan, Zheng" <zheng.z.yan@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Diffstat (limited to 'fs/ext4')
-rw-r--r--  fs/ext4/inode.c | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 9115f2807515..4cf2619f007c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2559,7 +2559,7 @@ retry:
 			break;
 	}
 	blk_finish_plug(&plug);
-	if (!ret && !cycled) {
+	if (!ret && !cycled && wbc->nr_to_write > 0) {
 		cycled = 1;
 		mpd.last_page = writeback_index - 1;
 		mpd.first_page = 0;