author		Linus Torvalds <torvalds@linux-foundation.org>	2024-07-18 15:08:12 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2024-07-18 15:08:12 -0700
commit		76d9b92e68f2bb55890f935c5143f4fef97a935d (patch)
tree		6255edb0d3eb0e9378b0e4015d9696466e69154f /rust
parent		b2fc97c18614f99179700be263ecbc667c91a4e8 (diff)
parent		436381eaf2a423e60fc8340399f7d2458091b383 (diff)
download	linux-76d9b92e68f2bb55890f935c5143f4fef97a935d.tar.gz
		linux-76d9b92e68f2bb55890f935c5143f4fef97a935d.tar.bz2
		linux-76d9b92e68f2bb55890f935c5143f4fef97a935d.zip
Merge tag 'slab-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab
Pull slab updates from Vlastimil Babka:
 "The most prominent change this time is the kmem_buckets based hardening
  of kmalloc() allocations from Kees Cook. We have also extended the
  kmalloc() alignment guarantees for non-power-of-two sizes in a way that
  benefits rust. The rest are various cleanups and non-critical fixups.

   - Dedicated bucket allocator (Kees Cook)

     This series [1] enhances the probabilistic defense against heap
     spraying/grooming of CONFIG_RANDOM_KMALLOC_CACHES from last year.
     kmalloc() users that are known to be useful for exploits can get a
     completely separate set of kmalloc caches that can't be shared with
     other users. The first converted users are alloc_msg() and
     memdup_user(). The hardening is enabled by CONFIG_SLAB_BUCKETS.

   - Extended kmalloc() alignment guarantees (Vlastimil Babka)

     For years now we have guaranteed natural alignment for power-of-two
     allocations, but nothing was defined for other sizes (in practice,
     we have two such buckets, kmalloc-96 and kmalloc-192).

     To avoid unnecessary padding in the rust layer due to its alignment
     rules, extend the guarantee so that the alignment is at least the
     largest power-of-two divisor of the requested size. This fits what
     rust needs, is a superset of the existing power-of-two guarantee,
     and does not in practice change the layout (and thus does not add
     overhead due to padding) of the kmalloc-96 and kmalloc-192 caches,
     unless slab debugging is enabled for them.

   - Cleanups and non-critical fixups (Chengming Zhou, Suren
     Baghdasaryan, Matthew Wilcox, Alex Shi, and Vlastimil Babka)

     Various tweaks related to the new alloc profiling code, folio
     conversion, debugging and more leftovers after SLAB"

Link: https://lore.kernel.org/all/20240701190152.it.631-kees@kernel.org/ [1]

* tag 'slab-for-6.11' of git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab:
  mm/memcg: alignment memcg_data define condition
  mm, slab: move prepare_slab_obj_exts_hook under CONFIG_MEM_ALLOC_PROFILING
  mm, slab: move allocation tagging code in the alloc path into a hook
  mm/util: Use dedicated slab buckets for memdup_user()
  ipc, msg: Use dedicated slab buckets for alloc_msg()
  mm/slab: Introduce kmem_buckets_create() and family
  mm/slab: Introduce kvmalloc_buckets_node() that can take kmem_buckets argument
  mm/slab: Plumb kmem_buckets into __do_kmalloc_node()
  mm/slab: Introduce kmem_buckets typedef
  slab, rust: extend kmalloc() alignment guarantees to remove Rust padding
  slab: delete useless RED_INACTIVE and RED_ACTIVE
  slab: don't put freepointer outside of object if only orig_size
  slab: make check_object() more consistent
  mm: Reduce the number of slab->folio casts
  mm, slab: don't wrap internal functions with alloc_hooks()
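To make the new bucket API concrete, below is a minimal C sketch of a
dedicated-bucket user, modeled on the memdup_user() conversion listed in the
shortlog above. kmem_buckets_create() and kmem_buckets_alloc() are introduced
by this series; all "my_*" identifiers and the exact argument values are
illustrative assumptions, not code taken from the pull.

  /* Illustrative sketch only: the kmem_buckets_create() argument choices
   * (no slab flags, usercopy window covering the whole object) are an
   * assumption, not copied from the series. */
  #include <linux/slab.h>
  #include <linux/init.h>
  #include <linux/limits.h>

  static kmem_buckets *my_buckets __ro_after_init;

  static int __init my_buckets_init(void)
  {
          /* One private set of kmalloc caches that is not shared with
           * other kmalloc() users, so heap sprays through those users
           * cannot land next to our objects. */
          my_buckets = kmem_buckets_create("my_buckets", 0, 0, INT_MAX, NULL);
          return 0;
  }
  subsys_initcall(my_buckets_init);

  static void *my_alloc(size_t len)
  {
          /* With CONFIG_SLAB_BUCKETS=n this degrades to a plain kmalloc(). */
          return kmem_buckets_alloc(my_buckets, len, GFP_KERNEL);
  }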
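The extended alignment guarantee also has a simple arithmetic form: for a
nonzero size, the largest power-of-two divisor is size & -size, and
kmalloc(size) is now aligned to at least that value. A short userspace C
sketch of the arithmetic, using the two non-power-of-two buckets named above
as examples:

  /* Userspace sketch of the extended kmalloc() alignment guarantee. */
  #include <assert.h>
  #include <stddef.h>

  static size_t largest_pow2_divisor(size_t size)
  {
          /* size & -size isolates the lowest set bit, which for a
           * nonzero size is its largest power-of-two divisor. */
          return size & -size;
  }

  int main(void)
  {
          assert(largest_pow2_divisor(96) == 32);   /* kmalloc-96:  32-byte alignment */
          assert(largest_pow2_divisor(192) == 64);  /* kmalloc-192: 64-byte alignment */
          assert(largest_pow2_divisor(128) == 128); /* power of two: natural alignment, as before */
          return 0;
  }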
Diffstat (limited to 'rust')
-rw-r--r--	rust/kernel/alloc/allocator.rs | 19
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index 229642960cd1..e6ea601f38c6 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -18,23 +18,16 @@ pub(crate) unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: F
     // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
     let layout = new_layout.pad_to_align();
 
-    let mut size = layout.size();
-
-    if layout.align() > bindings::ARCH_SLAB_MINALIGN {
-        // The alignment requirement exceeds the slab guarantee, thus try to enlarge the size
-        // to use the "power-of-two" size/alignment guarantee (see comments in `kmalloc()` for
-        // more information).
-        //
-        // Note that `layout.size()` (after padding) is guaranteed to be a multiple of
-        // `layout.align()`, so `next_power_of_two` gives enough alignment guarantee.
-        size = size.next_power_of_two();
-    }
+    // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()`
+    // which together with the slab guarantees means the `krealloc` will return a properly aligned
+    // object (see comments in `kmalloc()` for more information).
+    let size = layout.size();
 
     // SAFETY:
     // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the
     //   function safety requirement.
-    // - `size` is greater than 0 since it's either a `layout.size()` (which cannot be zero
-    //   according to the function safety requirement) or a result from `next_power_of_two()`.
+    // - `size` is greater than 0 since it's from `layout.size()` (which cannot be zero according
+    //   to the function safety requirement)
     unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 }
 }
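
The comment added in this hunk compresses an argument that can be checked in
isolation: pad_to_align() makes the size a multiple of the (power-of-two)
alignment, so the largest power-of-two divisor of the size, which kmalloc()
now guarantees as alignment, is at least the requested alignment, and the old
next_power_of_two() enlargement is unnecessary. A userspace C sketch of that
invariant, with arbitrary example values:

  /* Invariant behind the simplified krealloc_aligned(): if align is a
   * power of two and size is padded up to a multiple of align, then the
   * largest power-of-two divisor of the padded size is >= align. */
  #include <assert.h>
  #include <stddef.h>

  int main(void)
  {
          size_t align = 16; /* any power of two, e.g. Layout::align() */
          size_t size = 40;  /* example request before padding */

          /* Equivalent of Layout::pad_to_align(): round size up to a
           * multiple of align. */
          size_t padded = (size + align - 1) & ~(align - 1);

          assert(padded % align == 0);         /* multiple of align */
          assert((padded & -padded) >= align); /* new kmalloc() guarantee suffices */
          return 0;
  }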