author    Oscar Salvador <osalvador@suse.de>    2021-05-04 18:35:20 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>    2021-05-05 11:27:22 -0700
commit    9f27b34f234da7a185b4f1a2aa2cea2c47c458bf (patch)
tree      be840f1c932688228ef125a2417ee383a14f9ab1
parent    c2ad7a1ffeafa32eb3b3b99835f210ad402a86ff (diff)
mm,hugetlb: drop clearing of flag from prep_new_huge_page
Pages allocated via the page allocator or CMA get their private field cleared by means of post_alloc_hook(). Pages allocated during boot, that is directly from the memblock allocator, get cleared by paging_init()-> .. ->memmap_init_zone-> .. ->__init_single_page() before any memblock allocation.

Based on this ground, let us remove the clearing of the flag from prep_new_huge_page() as it is not needed. This was a leftover from commit 6c0371490140 ("hugetlb: convert PageHugeFreed to HPageFreed flag"). Previously the explicit clearing was necessary because compound allocations do not get this initialization (see prep_compound_page).

Link: https://lkml.kernel.org/r/20210419075413.1064-4-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
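[Editor's note] As a rough illustration of the reasoning above, the following minimal userspace sketch models why the explicit clear is redundant: the hugetlb-specific HPG_* flag bits are kept in page->private, and the allocation paths named in the message already hand over pages whose private field is zero. The struct, bit index, and helper definitions below are simplified stand-ins that only mirror the kernel's naming; they are not the kernel's actual code.

/*
 * Simplified userspace model (illustration only, not kernel code).
 * The hugetlb flag bits live in page->private; if every allocation
 * path already yields private == 0, the HPG_freed bit starts out
 * clear and prep_new_huge_page() need not clear it again.
 */
#include <assert.h>
#include <string.h>

struct page {
	unsigned long private;		/* hugetlb flag bits live here */
};

enum hugetlb_page_flags {
	HPG_freed = 3,			/* bit index is illustrative */
};

/* Rough analogue of the HPAGEFLAG() test accessor. */
static int HPageFreed(struct page *page)
{
	return (page->private >> HPG_freed) & 1UL;
}

/* Rough analogue of post_alloc_hook()/__init_single_page() zeroing private. */
static void alloc_init(struct page *page)
{
	memset(page, 0, sizeof(*page));	/* private == 0, all HPG_* bits clear */
}

static void prep_new_huge_page(struct page *page)
{
	/* No ClearHPageFreed(page) needed: the bit is already clear. */
	assert(!HPageFreed(page));
}

int main(void)
{
	struct page page;

	alloc_init(&page);
	prep_new_huge_page(&page);
	return 0;
}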
-rw-r--r--    mm/hugetlb.c    1
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 017cb260354c..c1539fb95f51 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1494,7 +1494,6 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	spin_lock_irq(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
-	ClearHPageFreed(page);
 	spin_unlock_irq(&hugetlb_lock);
 }