path: root/mm/page-writeback.c
author     Nick Piggin <npiggin@suse.de>                     2009-01-06 14:39:08 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>    2009-01-06 15:58:59 -0800
commit     05fe478dd04e02fa230c305ab9b5616669821dd3 (patch)
tree       9b551aad196b66e5c773ed7619386a1bb5e14f41 /mm/page-writeback.c
parent     00266770b8b3a6a77f896ca501a0613739086832 (diff)
download   linux-stable-05fe478dd04e02fa230c305ab9b5616669821dd3.tar.gz
           linux-stable-05fe478dd04e02fa230c305ab9b5616669821dd3.tar.bz2
           linux-stable-05fe478dd04e02fa230c305ab9b5616669821dd3.zip
mm: write_cache_pages integrity fix
In write_cache_pages, nr_to_write is heeded even for data-integrity syncs, so the function will return success after writing out nr_to_write pages, even if that was not sufficient to guarantee data integrity.

The callers tend to set it to values that could break data integrity semantics easily in practice. For example, nr_to_write can be set to mapping->nrpages * 2; however, if a file has a single dirty page and fsync is called, subsequent pages might be concurrently added and dirtied, and write_cache_pages might then write out two of these newly dirtied pages while not writing out the old page that should have been written out.

Fix this by ignoring nr_to_write if it is a data integrity sync.

This is a data integrity bug.

The reason this has been done in the past is to avoid stalling sync operations behind page dirtiers:

  "If a file has one dirty page at offset 1000000000000000 then someone
   does an fsync() and someone else gets in first and starts madly writing
   pages at offset 0, we want to write that page at 1000000000000000.
   Somehow."

What we do today is return success after an arbitrary number of pages are written, whether or not we have provided the data-integrity semantics that the caller has asked for. Even this doesn't actually fix all stall cases completely: in the above situation, if the file has a huge number of pages in pagecache (but not dirty), then mapping->nrpages is going to be huge, even if pages are being dirtied.

This change does indeed make the possibility of long stalls larger, and that's not a good thing, but lying about data integrity is even worse. We have to either perform the sync, or return -ELINUXISLAME so at least the caller knows what has happened.

There are subsequent competing approaches in the works to solve the stall problems properly, without compromising data integrity.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
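To make the termination logic concrete, here is a minimal userspace sketch of the behaviour the message describes. The struct, the simulate_writeback() helper, and the page counts are illustrative assumptions for this note, not the real mm/page-writeback.c interfaces; the point is only that honouring nr_to_write unconditionally lets a WB_SYNC_ALL "sync" stop with dirty pages still unwritten, while the patched logic only lets WB_SYNC_NONE writeback stop early.

/*
 * Userspace sketch (NOT kernel code) of the write_cache_pages termination
 * logic before and after this patch.  simulate_writeback() is a hypothetical
 * helper used only for illustration.
 */
#include <stdio.h>
#include <stdbool.h>

enum sync_mode { WB_SYNC_NONE, WB_SYNC_ALL };

struct writeback_control {
	enum sync_mode sync_mode;
	long nr_to_write;
};

/* Returns the number of pages still dirty when the loop gives up. */
static int simulate_writeback(struct writeback_control *wbc,
			      bool heed_nr_to_write_unconditionally,
			      int dirty_pages)
{
	bool done = false;

	while (dirty_pages > 0 && !done) {
		dirty_pages--;		/* "write out" one page */

		/*
		 * Old behaviour: nr_to_write is honoured for every sync mode.
		 * New behaviour: only WB_SYNC_NONE (background/periodic
		 * writeback) may stop early; integrity syncs keep going.
		 */
		if (heed_nr_to_write_unconditionally ||
		    wbc->sync_mode == WB_SYNC_NONE) {
			if (--wbc->nr_to_write <= 0)
				done = true;
		}
	}
	return dirty_pages;
}

int main(void)
{
	struct writeback_control pre  = { WB_SYNC_ALL, 2 };
	struct writeback_control post = { WB_SYNC_ALL, 2 };

	/* 5 dirty pages, but the caller derived nr_to_write = 2. */
	printf("old code leaves %d dirty pages after 'fsync'\n",
	       simulate_writeback(&pre, true, 5));
	printf("new code leaves %d dirty pages after 'fsync'\n",
	       simulate_writeback(&post, false, 5));
	return 0;
}

Built with a plain cc invocation, the first call reports pages left dirty after the simulated fsync returned "success"; the second reports zero.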
Diffstat (limited to 'mm/page-writeback.c')
-rw-r--r--  mm/page-writeback.c | 6
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2e847cdcad0e..5edca676e2c3 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -963,8 +963,10 @@ retry:
}
}
- if (--nr_to_write <= 0)
- done = 1;
+ if (wbc->sync_mode == WB_SYNC_NONE) {
+ if (--wbc->nr_to_write <= 0)
+ done = 1;
+ }
if (wbc->nonblocking && bdi_write_congested(bdi)) {
wbc->encountered_congestion = 1;
done = 1;
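For reference, this is how the modified fragment reads once the patch is applied, with comments added here for explanation. Only the lines visible in the hunk above are reproduced; the surrounding loop is elided.

		/*
		 * Only WB_SYNC_NONE (background/periodic) writeback is
		 * allowed to stop after nr_to_write pages.  A data-integrity
		 * sync (WB_SYNC_ALL) no longer sets "done" here, so in this
		 * simplified reading it keeps iterating until no more tagged
		 * dirty pages are found in the range.
		 */
		if (wbc->sync_mode == WB_SYNC_NONE) {
			if (--wbc->nr_to_write <= 0)
				done = 1;
		}

		/* Non-blocking callers still bail out on a congested bdi. */
		if (wbc->nonblocking && bdi_write_congested(bdi)) {
			wbc->encountered_congestion = 1;
			done = 1;
		}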