author | Josef Bacik <jbacik@fb.com> | 2015-03-04 16:52:52 -0500 |
---|---|---|
committer | Josef Bacik <jbacik@fb.com> | 2015-08-18 10:20:09 -0700 |
commit | ac05fbb40062411ea1b722aa2cede7feaa94f1b4 (patch) | |
tree | 302f21a7e8e25efd99010fe09fc2057262eadf19 /fs/inode.c | |
parent | c7f5408493aeb01532927b2276316797a03ed6ee (diff) | |
inode: don't softlockup when evicting inodes
On a box with a lot of RAM (148GB) I can make the box soft-lockup after running
an fs_mark job that creates hundreds of millions of empty files. This is
because we never generate enough memory pressure to keep the number of inodes on
our unused list low, so when we go to unmount we have to evict ~100 million
inodes. This makes one processor a very unhappy person, so add a cond_resched()
in dispose_list(), and if we need a resched while processing the s_inodes list,
drop the lock, reschedule, and run dispose_list() on what we've culled so far. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Diffstat (limited to 'fs/inode.c')
-rw-r--r-- | fs/inode.c | 14 |
1 file changed, 14 insertions, 0 deletions
```diff
diff --git a/fs/inode.c b/fs/inode.c
index f09148e07198..78a17b8859e1 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -575,6 +575,7 @@ static void dispose_list(struct list_head *head)
 		list_del_init(&inode->i_lru);
 		evict(inode);
+		cond_resched();
 	}
 }
@@ -592,6 +593,7 @@ void evict_inodes(struct super_block *sb)
 	struct inode *inode, *next;
 	LIST_HEAD(dispose);
+again:
 	spin_lock(&sb->s_inode_list_lock);
 	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
 		if (atomic_read(&inode->i_count))
@@ -607,6 +609,18 @@ void evict_inodes(struct super_block *sb)
 		inode_lru_list_del(inode);
 		spin_unlock(&inode->i_lock);
 		list_add(&inode->i_lru, &dispose);
+
+		/*
+		 * We can have a ton of inodes to evict at unmount time given
+		 * enough memory, check to see if we need to go to sleep for a
+		 * bit so we don't livelock.
+		 */
+		if (need_resched()) {
+			spin_unlock(&sb->s_inode_list_lock);
+			cond_resched();
+			dispose_list(&dispose);
+			goto again;
+		}
 	}
 	spin_unlock(&sb->s_inode_list_lock);
```