author | Uladzislau Rezki (Sony) <urezki@gmail.com> | 2021-06-28 19:40:11 -0700
---|---|---
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2021-06-29 10:53:52 -0700
commit | a2afc59fb25027749bd41c44f47382522232019e (patch) |
tree | aa5f7ccac4ca6323be95a0450fa107559dc46eb1 | /include/linux/gfp.h
parent | 53d884a6675b0fd7bc8c7b4afd6ead6f17bc4c61 (diff) |
mm/page_alloc: add an alloc_pages_bulk_array_node() helper
Patch series "vmalloc() vs bulk allocator", v2.
This patch (of 3):
Add a "node" variant of the alloc_pages_bulk_array() function. The helper
guarantees that a __alloc_pages_bulk() is invoked with a valid NUMA node
ID.
Link: https://lkml.kernel.org/r/20210516202056.2120-1-urezki@gmail.com
Link: https://lkml.kernel.org/r/20210516202056.2120-2-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include/linux/gfp.h')
-rw-r--r-- | include/linux/gfp.h | 9 |
1 file changed, 9 insertions, 0 deletions
```diff
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 11da8af06704..94f0b8b1cb55 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -536,6 +536,15 @@ alloc_pages_bulk_array(gfp_t gfp, unsigned long nr_pages, struct page **page_arr
 	return __alloc_pages_bulk(gfp, numa_mem_id(), NULL, nr_pages, NULL, page_array);
 }
 
+static inline unsigned long
+alloc_pages_bulk_array_node(gfp_t gfp, int nid, unsigned long nr_pages, struct page **page_array)
+{
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
+	return __alloc_pages_bulk(gfp, nid, NULL, nr_pages, NULL, page_array);
+}
+
 /*
  * Allocate pages, preferring the node given as nid. The node must be valid and
  * online. For more general interface, see alloc_pages_node().
```
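For illustration, a hypothetical in-kernel caller of the new helper might look like the sketch below. The wrapper name fill_page_array(), the GFP_KERNEL flags, and the surrounding context are assumptions made for this example; only alloc_pages_bulk_array_node(), NUMA_NO_NODE, and numa_mem_id() come from the patch.

```c
#include <linux/gfp.h>	/* alloc_pages_bulk_array_node(), GFP_KERNEL */

/*
 * Hypothetical caller (not part of this patch): bulk-allocate up to
 * nr_pages pages, preferring the given node. Passing NUMA_NO_NODE is
 * safe because the helper resolves it to numa_mem_id() before calling
 * __alloc_pages_bulk(), so the caller needs no special case.
 */
static unsigned long fill_page_array(int nid, unsigned long nr_pages,
				     struct page **pages)
{
	/* Returns the number of entries of pages[] that were populated. */
	return alloc_pages_bulk_array_node(GFP_KERNEL, nid, nr_pages, pages);
}
```

Callers that already know a valid node can pass it straight through; the helper only rewrites nid when it is NUMA_NO_NODE.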