author		Dave Chinner <dchinner@redhat.com>	2024-04-30 15:28:24 +1000
committer	Andrew Morton <akpm@linux-foundation.org>	2024-05-19 14:40:44 -0700
commit		70c435ca8dcb64e3d7983a30a14484aa163bb2d2 (patch)
tree		3dfbbdc2f925ab1c6026f8da4514bff2c693c748 /lib
parent		1c00f9368628dde7337defd3699025e3611a816f (diff)
stackdepot: use gfp_nested_mask() instead of open coded masking
The stackdepot code is used by KASAN and lockdep for recording stack traces. Both of these track allocation context information, and so their internal allocations must obey the caller allocation contexts to avoid generating their own false positive warnings that have nothing to do with the code they are instrumenting/tracking.

We also don't want recording stack traces to deplete emergency memory reserves - debug code is useless if it creates new issues that can't be replicated when the debug code is disabled.

Switch the stackdepot allocation masking to use gfp_nested_mask() to address these issues. gfp_nested_mask() also strips GFP_ZONEMASK naturally, so that greatly simplifies this code.

Link: https://lkml.kernel.org/r/20240430054604.4169568-3-david@fromorbit.com
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
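For reference, gfp_nested_mask() was added earlier in this series: the parent commit lifts kmemleak's gfp_kmemleak_mask() into include/linux/gfp.h. A sketch of its semantics, assuming the definition introduced there (see include/linux/gfp.h for the authoritative version): the caller's flags are masked down to GFP_KERNEL, GFP_ATOMIC and __GFP_NOLOCKDEP - which implicitly drops GFP_ZONEMASK, since none of those flags carry zone modifier bits - and __GFP_NORETRY, __GFP_NOMEMALLOC and __GFP_NOWARN are forced on so the nested allocation fails fast, stays out of emergency reserves and stays quiet:

/*
 * Sketch, mirroring kmemleak's gfp_kmemleak_mask() as lifted to
 * include/linux/gfp.h by the parent patch in this series: keep only
 * the context-relevant caller bits, force the "fail fast, no
 * reserves, no warnings" bits for the nested allocation.
 */
#define gfp_nested_mask(gfp) \
	(((gfp) & (GFP_KERNEL | GFP_ATOMIC | __GFP_NOLOCKDEP)) | \
	 (__GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN))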
Diffstat (limited to 'lib')
-rw-r--r--	lib/stackdepot.c	11
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index cd8f23455285..5ed34cc963fc 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -624,15 +624,8 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
* we won't be able to do that under the lock.
*/
if (unlikely(can_alloc && !READ_ONCE(new_pool))) {
- /*
- * Zero out zone modifiers, as we don't have specific zone
- * requirements. Keep the flags related to allocation in atomic
- * contexts, I/O, nolockdep.
- */
- alloc_flags &= ~GFP_ZONEMASK;
- alloc_flags &= (GFP_ATOMIC | GFP_KERNEL | __GFP_NOLOCKDEP);
- alloc_flags |= __GFP_NOWARN;
- page = alloc_pages(alloc_flags, DEPOT_POOL_ORDER);
+ page = alloc_pages(gfp_nested_mask(alloc_flags),
+ DEPOT_POOL_ORDER);
if (page)
prealloc = page_address(page);
}
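Compared with the removed open-coded masking, which only ORed in __GFP_NOWARN, gfp_nested_mask() (assuming the definition sketched above) also forces __GFP_NORETRY and __GFP_NOMEMALLOC, which is what keeps stack depot's pool allocations out of the emergency reserves. For example, a hypothetical caller passing GFP_ATOMIC | __GFP_HIGHMEM would now allocate the pool with GFP_ATOMIC | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN: the zone modifier is stripped by the mask itself rather than by the removed explicit ~GFP_ZONEMASK step.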