commit    cd647d5651c0b0deaa26c1acb9e1789437ba9bc7
tree      b87f7631b707b36fa06e695c119fc3f9dcdd16c3 /fs/xfs/xfs_file.c
parent    e2aaee9cd34d8396a48abf0b1be81a464c1d51c5
author    Dave Chinner <dchinner@redhat.com>         2020-06-30 11:28:53 -0700
committer Darrick J. Wong <darrick.wong@oracle.com>  2020-07-06 10:46:58 -0700
xfs: use MMAPLOCK around filemap_map_pages()
The page fault-around path, ->map_pages, is implemented in XFS via
filemap_map_pages(). This function checks that pages found by page
cache lookups have not raced with truncate-based invalidation by
verifying that page->mapping is correct and that page->index is within EOF.
However, we've known for a long time that this is not sufficient to
protect against races with invalidations done by operations that do
not change EOF, e.g. hole punching and other fallocate()-based
direct extent manipulations. We protect against those races by
wrapping the page fault operations in an XFS_MMAPLOCK_SHARED lock,
so they serialise against fallocate and truncate before calling
into the filemap function that processes the fault.
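For reference, a minimal sketch of that existing pattern is shown below.
It is a simplified rendering of the non-DAX read-fault case only; the real
__xfs_filemap_fault also handles DAX, write faults and tracing, and the
function name used here is illustrative, not the kernel's.

/*
 * Simplified sketch (not verbatim kernel source): take the inode's
 * MMAPLOCK shared around the generic page cache fault so the fault
 * cannot race with hole punch, fallocate or truncate, which take the
 * same lock exclusively while invalidating the page cache.
 */
static vm_fault_t
xfs_filemap_fault_sketch(
	struct vm_fault		*vmf)
{
	struct inode		*inode = file_inode(vmf->vma->vm_file);
	struct xfs_inode	*ip = XFS_I(inode);
	vm_fault_t		ret;

	xfs_ilock(ip, XFS_MMAPLOCK_SHARED);	/* serialise vs extent manipulation */
	ret = filemap_fault(vmf);		/* generic read fault processing */
	xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);

	return ret;
}

Because fallocate and truncate take the MMAPLOCK exclusively, a fault that
holds it shared cannot observe pages being invalidated underneath it.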
Do the same for XFS's ->map_pages implementation to close this
potential data corruption issue.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Diffstat (limited to 'fs/xfs/xfs_file.c')
 fs/xfs/xfs_file.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 97aa74800bd9..cc6528726187 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1263,10 +1263,23 @@ xfs_filemap_pfn_mkwrite(
 	return __xfs_filemap_fault(vmf, PE_SIZE_PTE, true);
 }
 
+static void
+xfs_filemap_map_pages(
+	struct vm_fault		*vmf,
+	pgoff_t			start_pgoff,
+	pgoff_t			end_pgoff)
+{
+	struct inode		*inode = file_inode(vmf->vma->vm_file);
+
+	xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
+	filemap_map_pages(vmf, start_pgoff, end_pgoff);
+	xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
+}
+
 static const struct vm_operations_struct xfs_file_vm_ops = {
 	.fault		= xfs_filemap_fault,
 	.huge_fault	= xfs_filemap_huge_fault,
-	.map_pages	= filemap_map_pages,
+	.map_pages	= xfs_filemap_map_pages,
 	.page_mkwrite	= xfs_filemap_page_mkwrite,
 	.pfn_mkwrite	= xfs_filemap_pfn_mkwrite,
 };