From b95046b0472f7a805fa28fbcfc7205a76ff7a7d0 Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Wed, 6 Sep 2017 16:20:41 -0700
Subject: mm, sparse, page_ext: drop ugly N_HIGH_MEMORY branches for allocations

Commit f52407ce2dea ("memory hotplug: alloc page from other node in
memory online") introduced N_HIGH_MEMORY checks so that NUMA-aware
allocations are used only when the node has some memory present,
because the respective node might not have any memory yet at that time
and the allocation could fail or even OOM.

Things have changed since then, though.  Zonelists are now always
initialized before we do any allocations, even for hotplug (see
959ecc48fc75 ("mm/memory_hotplug.c: fix building of node hotplug
zonelist")).  Therefore these checks are not really needed.  In fact,
the caller of the allocator should never care whether the node is
populated, because that might change at any time.

Link: http://lkml.kernel.org/r/20170721143915.14161-10-mhocko@kernel.org
Signed-off-by: Michal Hocko
Acked-by: Vlastimil Babka
Cc: Shaohua Li
Cc: Joonsoo Kim
Cc: Johannes Weiner
Cc: Mel Gorman
Cc: Toshi Kani
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/sparse.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 7b4be3fd5cac..a9783acf2bb9 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -65,14 +65,10 @@ static noinline struct mem_section __ref *sparse_index_alloc(int nid)
 	unsigned long array_size = SECTIONS_PER_ROOT *
 					sizeof(struct mem_section);
 
-	if (slab_is_available()) {
-		if (node_state(nid, N_HIGH_MEMORY))
-			section = kzalloc_node(array_size, GFP_KERNEL, nid);
-		else
-			section = kzalloc(array_size, GFP_KERNEL);
-	} else {
+	if (slab_is_available())
+		section = kzalloc_node(array_size, GFP_KERNEL, nid);
+	else
 		section = memblock_virt_alloc_node(array_size, nid);
-	}
 
 	return section;
 }
-- 
cgit v1.2.3


From b4ccec41af82b5a5518c6534444412961894f07c Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Fri, 8 Sep 2017 16:13:15 -0700
Subject: mm/sparse.c: fix typo in online_mem_sections

online_mem_sections() accidentally marks online only the first section
in the given range.  This is a typo that went unnoticed because I
hadn't tested large 2GB blocks previously.  All users of
pfn_to_online_page() would get confused about the rest of the pfn range
in the block.

All we need to fix this is to use the loop iterator (pfn) rather than
start_pfn.

Link: http://lkml.kernel.org/r/20170904112210.3401-1-mhocko@kernel.org
Fixes: 2d070eab2e82 ("mm: consider zone which is not fully populated to have holes")
Signed-off-by: Michal Hocko
Acked-by: Vlastimil Babka
Cc: Anshuman Khandual
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/sparse.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index a9783acf2bb9..83b3bf6461af 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -626,7 +626,7 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 	unsigned long pfn;
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		unsigned long section_nr = pfn_to_section_nr(start_pfn);
+		unsigned long section_nr = pfn_to_section_nr(pfn);
 		struct mem_section *ms;
 
 		/* onlining code should never touch invalid ranges */
-- 
cgit v1.2.3
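
For reference, here is a sketch of how sparse_index_alloc() reads once
the first patch is applied, reconstructed from the hunk above.  The
NULL initialization of 'section' is assumed from context the diff does
not show:

static noinline struct mem_section __ref *sparse_index_alloc(int nid)
{
	/* assumed initializer; the declaration is outside the hunk shown above */
	struct mem_section *section = NULL;
	unsigned long array_size = SECTIONS_PER_ROOT *
					sizeof(struct mem_section);

	/*
	 * Zonelists are initialized before any allocation can happen, even
	 * during hotplug, so the node-aware call is safe regardless of
	 * whether the node currently has memory.
	 */
	if (slab_is_available())
		section = kzalloc_node(array_size, GFP_KERNEL, nid);
	else
		section = memblock_virt_alloc_node(array_size, nid);

	return section;
}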
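
Similarly, a sketch of the fixed loop in online_mem_sections() after
the second patch.  Only the lines down to the "invalid ranges" comment
appear in the hunk; the validity check and the setting of the
SECTION_IS_ONLINE flag below it are filled in as an assumption about
how the onlining code marks each section:

void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
		/* use the loop iterator, not start_pfn, so every section in the range is marked */
		unsigned long section_nr = pfn_to_section_nr(pfn);
		struct mem_section *ms;

		/* onlining code should never touch invalid ranges */
		if (WARN_ON(!valid_section_nr(section_nr)))
			continue;

		/* assumed tail of the loop, not visible in the hunk above */
		ms = __nr_to_section(section_nr);
		ms->section_mem_map |= SECTION_IS_ONLINE;
	}
}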