author		Jan Kara <jack@suse.cz>	2021-01-28 19:19:45 +0100
committer	Jan Kara <jack@suse.cz>	2021-07-13 13:14:27 +0200
commit		730633f0b7f951726e87f912a6323641f674ae34 (patch)
tree		1c4a6eb5ddbc0c28e6d37a1418ec259cb6daef27 /mm/readahead.c
parent		c625b4cc57d078b03fd8aa4d86c99d584a1782be (diff)
mm: Protect operations adding pages to page cache with invalidate_lock
Currently, serializing operations such as page fault, read, or readahead
against hole punching is rather difficult. The basic race scheme is
like:
fallocate(FALLOC_FL_PUNCH_HOLE)			read / fault / ..
  truncate_inode_pages_range()
						<create pages in page
						 cache here>
  <update fs block mapping and free blocks>
The problem is that read / page fault / readahead can thus instantiate
pages in the page cache with potentially stale data (if the blocks get
quickly reused). Avoiding this race is not simple - page locks do not
work because we want to make sure there are *no* pages in the given
range. inode->i_rwsem does not work because a page fault happens under
mmap_sem, which ranks below inode->i_rwsem. Also, using it for reads
makes performance suffer for mixed read-write workloads.
So create a new rw_semaphore in the address_space - invalidate_lock -
that protects adding pages to the page cache for page faults / reads /
readahead.
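
The readahead patch below takes the new lock in shared mode; the
hole-punching side is meant to hold it exclusively around the whole
evict-and-free sequence. A minimal sketch of that writer-side pattern
follows - the foofs_* names are hypothetical and only illustrate where a
filesystem's own code would go, while filemap_invalidate_lock() /
filemap_invalidate_unlock() and truncate_inode_pages_range() are the
kernel helpers this series relies on:

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical hole-punch path taking invalidate_lock exclusively. */
static int foofs_punch_hole(struct inode *inode, loff_t start, loff_t end)
{
	struct address_space *mapping = inode->i_mapping;

	/* Keep page faults / reads / readahead from re-adding pages. */
	filemap_invalidate_lock(mapping);

	/* First evict any cached pages in the punched range... */
	truncate_inode_pages_range(mapping, start, end);

	/* ...then update the fs block mapping and free the blocks. */
	/* foofs_remove_blocks(inode, start, end);  -- illustrative only */

	filemap_invalidate_unlock(mapping);
	return 0;
}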
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jan Kara <jack@suse.cz>
Diffstat (limited to 'mm/readahead.c')
-rw-r--r--  mm/readahead.c  2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/mm/readahead.c b/mm/readahead.c
index d589f147f4c2..41b75d76d36e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -192,6 +192,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 */
 	unsigned int nofs = memalloc_nofs_save();
 
+	filemap_invalidate_lock_shared(mapping);
 	/*
 	 * Preallocate as many pages as we will need.
 	 */
@@ -236,6 +237,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 	 * will then handle the error.
 	 */
 	read_pages(ractl, &page_pool, false);
+	filemap_invalidate_unlock_shared(mapping);
 	memalloc_nofs_restore(nofs);
 }
 EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);