author     Joonsoo Kim <iamjoonsoo.kim@lge.com>            2020-08-11 18:37:17 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2020-08-12 10:58:02 -0700
commit     d92bbc2719bd2be237ee336113b63492a6baca3b (patch)
tree       f1d571ea966d80dd7bb0538822d4b884c4036274 /mm/mempolicy.c
parent     b4b382238ed2f94f0d3860f9120b66404fa99463 (diff)
mm/hugetlb: unify migration callbacks
There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for their
__GFP_THISNODE handling. It is redundant to keep two nearly identical
functions just to handle this one flag, so this patch removes one of
them by introducing a new argument, gfp_mask, to
alloc_huge_page_nodemask().

With the gfp_mask argument it becomes the caller's job to provide the
correct gfp_mask, so every call site of alloc_huge_page_nodemask() is
changed to pass one.

Note that it is safe to remove the node id check from
alloc_huge_page_node(), since no caller passes NUMA_NO_NODE as a node
id.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Roman Gushchin <guro@fb.com>
Link: http://lkml.kernel.org/r/1594622517-20681-4-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
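For reference, a minimal sketch of the calling convention before and after
this change; the hugetlb function signatures are the ones this series
modifies, while h, nid, nmask and page are illustrative locals:

	/* Before: two near-duplicate callbacks, differing only in __GFP_THISNODE */
	page = alloc_huge_page_node(h, nid);            /* ORs in __GFP_THISNODE internally */
	page = alloc_huge_page_nodemask(h, nid, nmask); /* never sets __GFP_THISNODE */

	/* After: a single callback; the caller builds the gfp_mask itself */
	gfp_t gfp_mask = htlb_alloc_mask(h);            /* base mask derived from the hstate */

	page = alloc_huge_page_nodemask(h, nid, NULL, gfp_mask | __GFP_THISNODE);
	page = alloc_huge_page_nodemask(h, nid, nmask, gfp_mask);

This is exactly the caller-side pattern visible in the mm/mempolicy.c hunk
below.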
Diffstat (limited to 'mm/mempolicy.c')
 mm/mempolicy.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 25b7e412c20b..9ae2b704bdf6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1068,10 +1068,12 @@ static int migrate_page_add(struct page *page, struct list_head *pagelist,
 /* page allocation callback for NUMA node migration */
 struct page *alloc_new_node_page(struct page *page, unsigned long node)
 {
-	if (PageHuge(page))
-		return alloc_huge_page_node(page_hstate(compound_head(page)),
-					node);
-	else if (PageTransHuge(page)) {
+	if (PageHuge(page)) {
+		struct hstate *h = page_hstate(compound_head(page));
+		gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
+
+		return alloc_huge_page_nodemask(h, node, NULL, gfp_mask);
+	} else if (PageTransHuge(page)) {
 		struct page *thp;
 
 		thp = alloc_pages_node(node,