author:    Marc Dionne <marc.dionne@auristor.com>	2021-06-06 21:21:27 +0100
committer: Linus Torvalds <torvalds@linux-foundation.org>	2021-06-07 12:56:05 -0700
commit:    dc2557308ede6bd8a91409fe196ba4b081567809 (patch)
tree:      515be7f641ff057a65c2e6c85fcbfb86efba7b64 /fs/afs
parent:    614124bea77e452aa6df7a8714e8bc820b489922 (diff)
afs: Fix partial writeback of large files on fsync and close
In commit e87b03f5830e ("afs: Prepare for use of THPs"), the return value
of afs_write_back_from_locked_page() was changed from a number of pages to
a length in bytes.  The loop in afs_writepages_region() uses the return
value to compute the index at which to look for dirty pages in the next
iteration, but treats it as a number of pages and wrongly multiplies it by
PAGE_SIZE.  This gives a very large index value, potentially skipping any
dirty data that was not covered in the first pass, which is limited to
256M.

This causes fsync(), and indirectly close(), to only do a partial
writeback of a large file's dirty data.  The rest is eventually written
back by background threads after dirty_expire_centisecs.

Fixes: e87b03f5830e ("afs: Prepare for use of THPs")
Signed-off-by: Marc Dionne <marc.dionne@auristor.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jeffrey Altman <jaltman@auristor.com>
cc: linux-afs@lists.infradead.org
Link: https://lore.kernel.org/r/20210604175504.4055-1-marc.c.dionne@gmail.com/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
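To illustrate the arithmetic, the following is a minimal userspace sketch, not the
kernel code itself: it assumes a 4 KiB PAGE_SIZE and takes the 256M first-pass limit
from the message above.  Multiplying a 256 MiB byte count by PAGE_SIZE jumps the next
search index to 1 TiB, far past any remaining dirty pages.

	#include <stdio.h>

	#define PAGE_SIZE	4096ULL		/* assumed 4 KiB page size */
	#define FIRST_PASS	(256ULL << 20)	/* first pass covers at most 256M */

	int main(void)
	{
		unsigned long long start = 0;
		unsigned long long ret = FIRST_PASS;	/* length written back, in bytes */

		/* Buggy update: treats the byte count as a page count. */
		unsigned long long buggy_next = start + ret * PAGE_SIZE;

		/* Fixed update: ret is already a length in bytes. */
		unsigned long long fixed_next = start + ret;

		printf("buggy next index: %llu bytes (1 TiB)\n", buggy_next);
		printf("fixed next index: %llu bytes (256 MiB)\n", fixed_next);
		return 0;
	}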
Diffstat (limited to 'fs/afs')
-rw-r--r--  fs/afs/write.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 3edb6204b937..a523bb86915d 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -730,7 +730,7 @@ static int afs_writepages_region(struct address_space *mapping,
 			return ret;
 		}
 
-		start += ret * PAGE_SIZE;
+		start += ret;
 		cond_resched();
 	} while (wbc->nr_to_write > 0);