author	Chunguang Xu <brookxu@tencent.com>	2020-12-04 11:05:43 +0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2021-01-06 14:56:56 +0100
commit	aceb8ae8e3b10503a2b82b17f626c9278fe792b4 (patch)
tree	d2981aa1cb1e65b5e1bb657b580629a95f91ad3d
parent	aff18aa806fd145e620ab9ae264caf3ec270e121 (diff)
ext4: avoid s_mb_prefetch to be zero in individual scenarios
[ Upstream commit 82ef1370b0c1757ab4ce29f34c52b4e93839b0aa ]

Commit cfd732377221 ("ext4: add prefetching for block allocation
bitmaps") introduced block bitmap prefetching, expecting to read the
block bitmaps of a flex_bg in a single IO. However, it ignores the
value range of s_log_groups_per_flex. When s_log_groups_per_flex is
greater than 27, s_mb_prefetch or s_mb_prefetch_limit overflows,
causing a divide-by-zero exception.

In addition, the logic for calculating nr is flawed: the size of a
flexbg is fixed during a single mount, but s_mb_prefetch can be
modified at runtime, so nr can fall outside the expected range of
[1, flexbg_size].

To solve this, set an upper limit on s_mb_prefetch. Since we expect
to load the block bitmaps of a flex_bg in a single IO, a reasonable
upper bound can be taken from the IO limit parameters; after
consideration, BLK_MAX_SEGMENT_SIZE was chosen. This resolves the
divide-by-zero problem without degrading performance.

[ Some minor code simplifications to make the changes easy to
  follow -- TYT ]

Reported-by: Tosk Robot <tencent_os_robot@tencent.com>
Signed-off-by: Chunguang Xu <brookxu@tencent.com>
Reviewed-by: Samuel Liao <samuelliao@tencent.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Link: https://lore.kernel.org/r/1607051143-24508-1-git-send-email-brookxu@tencent.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sashal@kernel.org>
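To make the failure mode concrete, here is a minimal userspace sketch
of the pre-patch arithmetic (the value 29, the group number, and the
local variable names are illustrative assumptions; in the kernel,
s_mb_prefetch is a 32-bit unsigned field of struct ext4_sb_info):

#include <stdio.h>

int main(void)
{
	unsigned int log_groups_per_flex = 29;	/* > 27, as in the report */
	unsigned int group = 100;		/* made-up group number */
	unsigned int s_mb_prefetch;

	/*
	 * Pre-patch logic: 1U << 29 is fine on its own, but the "* 8"
	 * shifts the only set bit past bit 31, so the 32-bit value
	 * wraps to 0.
	 */
	s_mb_prefetch = (1U << log_groups_per_flex) * 8;
	printf("s_mb_prefetch = %u\n", s_mb_prefetch);	/* prints 0 */

	/* The old nr computation then divides by this value: */
	if (s_mb_prefetch == 0)
		printf("group / s_mb_prefetch would divide by zero\n");
	else
		printf("nr = %u\n", (group / s_mb_prefetch) * s_mb_prefetch);
	return 0;
}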
-rw-r--r--	fs/ext4/mballoc.c	9
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 37a619bf1ac7..e67d5de6f28c 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -2395,9 +2395,9 @@ repeat:
 			nr = sbi->s_mb_prefetch;
 			if (ext4_has_feature_flex_bg(sb)) {
-				nr = (group / sbi->s_mb_prefetch) *
-					sbi->s_mb_prefetch;
-				nr = nr + sbi->s_mb_prefetch - group;
+				nr = 1 << sbi->s_log_groups_per_flex;
+				nr -= group & (nr - 1);
+				nr = min(nr, sbi->s_mb_prefetch);
 			}
 			prefetch_grp = ext4_mb_prefetch(sb, group,
 						nr, &prefetch_ios);
@@ -2733,7 +2733,8 @@ static int ext4_mb_init_backend(struct super_block *sb)
 	if (ext4_has_feature_flex_bg(sb)) {
 		/* a single flex group is supposed to be read by a single IO */
-		sbi->s_mb_prefetch = 1 << sbi->s_es->s_log_groups_per_flex;
+		sbi->s_mb_prefetch = min(1 << sbi->s_es->s_log_groups_per_flex,
+					 BLK_MAX_SEGMENT_SIZE >> (sb->s_blocksize_bits - 9));
 		sbi->s_mb_prefetch *= 8; /* 8 prefetch IOs in flight at most */
 	} else {
 		sbi->s_mb_prefetch = 32;
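
For illustration, a minimal userspace sketch of the patched logic
under the same pathological input. BLK_MAX_SEGMENT_SIZE is 65536 in
the kernel; the block size, group number, and umin() helper below are
assumptions made for the example. With 4 KiB blocks the cap evaluates
to 65536 >> (12 - 9) = 8192 groups, so the subsequent "* 8" can no
longer overflow, and the new nr computation involves no division at
all:

#include <stdio.h>

#define BLK_MAX_SEGMENT_SIZE	65536	/* as in the kernel */

static unsigned int umin(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int s_blocksize_bits = 12;	/* 4 KiB blocks (example) */
	unsigned int log_groups_per_flex = 29;	/* pathological value */
	unsigned int group = 21;		/* made-up group number */
	unsigned int s_mb_prefetch, nr;

	/*
	 * Second hunk: cap the per-flexbg prefetch window before the
	 * "* 8", so the set bit can no longer be shifted out of the
	 * 32-bit value.
	 */
	s_mb_prefetch = umin(1U << log_groups_per_flex,
			     BLK_MAX_SEGMENT_SIZE >> (s_blocksize_bits - 9));
	s_mb_prefetch *= 8;	/* 8 prefetch IOs in flight at most */
	printf("s_mb_prefetch = %u\n", s_mb_prefetch);	/* 65536, not 0 */

	/*
	 * First hunk: nr counts the groups from 'group' to the end of
	 * its flex group, clamped to the prefetch window -- no division
	 * involved, so no divide-by-zero is possible.
	 */
	nr = 1U << log_groups_per_flex;
	nr -= group & (nr - 1);
	nr = umin(nr, s_mb_prefetch);
	printf("nr = %u\n", nr);
	return 0;
}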