path: root/mm/page_alloc.c
authorCharan Teja Kalla <quic_charante@quicinc.com>2023-11-24 16:35:53 +0530
committerAndrew Morton <akpm@linux-foundation.org>2023-12-10 16:51:52 -0800
commit9cd20f3fe045af95a8fe7a12328b21bfd2f3b8bf (patch)
tree9cc9ae4d092f256bd3694d7b16d5b0501e042469 /mm/page_alloc.c
parentd68e39fc45f70e35eb74df2128d315c1d91e4dc4 (diff)
mm: page_alloc: enforce minimum zone size to do high atomic reserves
Highatomic reserves are capped at roughly 1% of the zone, with a minimum of one pageblock. Encountered a system with the below configuration:

Normal free:7728kB boost:0kB min:804kB low:1004kB high:1204kB reserved_highatomic:8192KB managed:49224kB

On such systems, even a single pageblock pushes the highatomic reserves to ~8% of the zone memory. This high value can easily exert memory pressure on the zone. Per discussion with Michal and Mel, it is not very useful to reserve memory for highatomic allocations on such small systems [1]. Since the minimum size for high atomic reserves is always going to be one pageblock, don't reserve any memory for high atomic allocations when 1% of the zone's managed pages falls below the pageblock size. Thanks to Michal for this suggestion [2].

Since no memory is then being reserved for high atomic allocations, this patch can be reverted if the respective allocation failures are seen.

[1] https://lore.kernel.org/linux-mm/20231117161956.d3yjdxhhm4rhl7h2@techsingularity.net/
[2] https://lore.kernel.org/linux-mm/ZVYRJMUitykepLRy@tiehlicka/

Link: https://lkml.kernel.org/r/c3a2a48e2cfe08176a80eaf01c110deb9e918055.1700821416.git.quic_charante@quicinc.com
Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavankumar Kondeti <quic_pkondeti@quicinc.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r-- mm/page_alloc.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2a272eb108a5..ef8b151edbd0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1881,9 +1881,12 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone)
 	/*
 	 * The number reserved as: minimum is 1 pageblock, maximum is
-	 * roughly 1% of a zone.
+	 * roughly 1% of a zone. But if 1% of a zone falls below a
+	 * pageblock size, then don't reserve any pageblocks.
 	 * Check is race-prone but harmless.
 	 */
+	if ((zone_managed_pages(zone) / 100) < pageblock_nr_pages)
+		return;
 	max_managed = ALIGN((zone_managed_pages(zone) / 100), pageblock_nr_pages);
 	if (zone->nr_reserved_highatomic >= max_managed)
 		return;