author    Jia He <hejianet@gmail.com>  2016-08-02 14:02:31 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2016-08-20 18:09:24 +0200
commit    4733b66d45d4452155a123b12dfeba3edba0facd (patch)
tree      6ca1400d8aa7472a0b89a0b8795e890ce8945e70
parent    7928de5185f04b970dc9505cb8caa1cb5e46fa07 (diff)
mm/hugetlb: avoid soft lockup in set_max_huge_pages()
commit 649920c6ab93429b94bc7c1aa7c0e8395351be32 upstream.

On powerpc servers with large memory (32TB), we observed several soft
lockups during hugepage stress tests.

The call traces are as follows:

1. get_page_from_freelist+0x2d8/0xd50
   __alloc_pages_nodemask+0x180/0xc20
   alloc_fresh_huge_page+0xb0/0x190
   set_max_huge_pages+0x164/0x3b0

2. prep_new_huge_page+0x5c/0x100
   alloc_fresh_huge_page+0xc8/0x190
   set_max_huge_pages+0x164/0x3b0

This patch fixes such soft lockups. It is safe to call cond_resched()
there because it is outside the spin_lock/unlock section.

Link: http://lkml.kernel.org/r/1469674442-14848-1-git-send-email-hejianet@gmail.com
Signed-off-by: Jia He <hejianet@gmail.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r--  mm/hugetlb.c | 4 ++++
 1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ef6963b577fd..0c31f184daf8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2170,6 +2170,10 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
* and reducing the surplus.
*/
spin_unlock(&hugetlb_lock);
+
+ /* yield cpu to avoid soft lockup */
+ cond_resched();
+
if (hstate_is_gigantic(h))
ret = alloc_fresh_gigantic_page(h, nodes_allowed);
else
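For context, here is a minimal sketch of the loop shape this hunk lands in. It is illustrative only, not the exact upstream code: alloc_one_huge_page() is a hypothetical stand-in condensing the gigantic/normal allocation branch shown in the hunk above.

/*
 * Sketch of the allocation loop in set_max_huge_pages() after the
 * patch: it repeatedly drops hugetlb_lock, allocates one huge page,
 * and retakes the lock. alloc_one_huge_page() is a hypothetical
 * stand-in for the alloc_fresh_gigantic_page()/alloc_fresh_huge_page()
 * branch in the hunk above.
 */
spin_lock(&hugetlb_lock);
while (count > persistent_huge_pages(h)) {
	spin_unlock(&hugetlb_lock);

	/*
	 * Yield the CPU here, where no spinlock is held. A long run
	 * of allocations (e.g. raising nr_hugepages on a 32TB
	 * machine) can otherwise monopolize the CPU long enough to
	 * trip the soft-lockup watchdog. cond_resched() may sleep,
	 * so it must not be called inside the locked section.
	 */
	cond_resched();

	if (!alloc_one_huge_page(h, nodes_allowed))
		break;

	spin_lock(&hugetlb_lock);
}
spin_unlock(&hugetlb_lock);

The key invariant is that cond_resched() may schedule away, so it is only legal where the task holds no spinlocks; the patch therefore places it immediately after spin_unlock(&hugetlb_lock), before the allocation that dominates each iteration.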