author:    Christoph Hellwig <hch@lst.de>  2024-04-22 13:20:18 +0200
committer: Chandan Babu R <chandanbabu@kernel.org>  2024-04-22 18:00:49 +0530
commit:    bd1753d8c42b6bd5d9a81c81d1ce6e3affe3a59f
tree:      3c1483cfedd3bc34e939999afa62f61f7a12fba4
parent:    da2b9c3a8d2cbdeec3f13cebf4c6c86c13e1077e
xfs: stop the steal (of data blocks for RT indirect blocks)
When xfs_bmap_del_extent_delay has to split a delalloc extent, it tries
to steal blocks from the part that gets unmapped to increase the
indirect block reservation, which now needs to cover two extents
instead of one.
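(For context: the indirect reservation is a worst-case bound on the bmap
btree blocks needed to map an extent. Below is a minimal userspace sketch
of such a bound, with made-up fanout and height constants standing in for
the geometry the kernel derives from the superblock; it is not the real
xfs_bmap_worst_indlen().)

	#include <stdint.h>

	#define MAXRECS		125	/* hypothetical btree records per block */
	#define MAXLEVELS	5	/* hypothetical maximum btree height */

	/*
	 * Worst-case indirect (bmap btree) blocks needed to map @len data
	 * blocks: ceil(len / MAXRECS) blocks at the leaf level, then the
	 * same reduction applied level by level until one block remains.
	 */
	static uint64_t
	worst_indlen(uint64_t len)
	{
		uint64_t rval = 0;
		int level;

		for (level = 0; level < MAXLEVELS && len > 1; level++) {
			len = (len + MAXRECS - 1) / MAXRECS;
			rval += len;
		}
		return rval;
	}

Splitting one extent into two can push the combined worst case above the
single reservation taken at delalloc time, which is where the steal comes
in.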
This works perfectly fine on the data device, where the data and
indirect blocks come from the same pool. It has no chance of working
when the inode sits on the RT device, because RT data blocks and
data-device metadata come from different pools. To support re-enabling
delalloc for inodes on the RT device, make the steal conditional on the
extent not being a realtime extent.
Note that splitting a delalloc extent should only happen on writeback
failure; for other kinds of hole punching we first write back all data
and thus convert the delalloc reservations covering the hole to real
allocations.
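To make the guarded steal concrete, here is a minimal standalone sketch
of the decision the patch changes. steal_for_split() and its parameters
are hypothetical stand-ins for the xfs_bmap_del_extent_delay()
internals, not the kernel code itself:

	#include <stdint.h>

	/*
	 * After a split, the two halves need got_indlen + new_indlen
	 * blocks of indirect reservation, but only da_old was reserved.
	 * On the data device the shortfall may be topped up from the
	 * freed data blocks; on the RT device it must not be, because
	 * freed RT data blocks cannot back data-device metadata.
	 */
	static uint64_t
	steal_for_split(uint64_t da_old, uint64_t got_indlen,
			uint64_t new_indlen, uint64_t del_blockcount,
			int isrt)
	{
		uint64_t da_new = got_indlen + new_indlen;
		uint64_t stolen = 0;

		if (da_new > da_old && !isrt) {
			stolen = da_new - da_old;
			/* can't steal more than was freed */
			if (stolen > del_blockcount)
				stolen = del_blockcount;
		}
		return stolen;
	}

On the RT device the sketch simply returns 0 and the under-filled
reservation is tolerated, matching the patch below.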
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
Signed-off-by: Chandan Babu R <chandanbabu@kernel.org>
Diffstat (limited to 'fs/xfs')
 fs/xfs/libxfs/xfs_bmap.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
index fe33254ca390..8a1446e025e0 100644
--- a/fs/xfs/libxfs/xfs_bmap.c
+++ b/fs/xfs/libxfs/xfs_bmap.c
@@ -4983,9 +4983,14 @@ xfs_bmap_del_extent_delay(
 		/*
 		 * Steal as many blocks as we can to try and satisfy the worst
 		 * case indlen for both new extents.
+		 *
+		 * However, we can't just steal reservations from the data
+		 * blocks if this is an RT inode, as the data and metadata
+		 * blocks come from different pools.  We'll have to live with
+		 * under-filled indirect reservation in this case.
 		 */
 		da_new = got_indlen + new_indlen;
-		if (da_new > da_old) {
+		if (da_new > da_old && !isrt) {
 			stolen = XFS_FILBLKS_MIN(da_new - da_old,
 					del->br_blockcount);
 			da_old += stolen;