author		Vlastimil Babka <vbabka@suse.cz>	2023-09-08 11:47:09 +0200
committer	Vlastimil Babka <vbabka@suse.cz>	2023-10-02 11:55:41 +0200
commit		5886fc82b6e3166dd1ba876809888fc39028d626
tree		f7a5224f726ed8ff16f0f7ff121c63645accbb78 /mm/slub.c
parent		0fe2735d5e2e00601339aab3658e05f3707a1745
mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
The main loop in calculate_order() currently tries to find an order with
at most 1/4 waste. If that's impossible (for particular large object
sizes), there's a fallback that tries to place a single object within
slub_max_order.

If we expand the loop boundary to also allow up to 1/2 waste as the last
resort, we can remove the fallback and simplify the code, as the loop
will find an order for such sizes as well. Note that we don't need to
allow more than 1/2 waste, as that can never happen: calc_slab_order()
would calculate more objects to fit, reducing the waste below 1/2.

Successfully finding an order in the loop (compared to the fallback)
also has the benefit of trying to satisfy min_objects, because the
fallback was passing 1. Thus the resulting slab orders might be larger
(not because it would improve the waste, but to reduce pressure on
shared locks), which is one of the goals of calculate_order().

For example, with nr_cpus=1, 4kB PAGE_SIZE and slub_max_order=3, before
the patch we would get the following orders for these object sizes:

  2056 to 10920  - order-3, as selected by the loop
  10928 to 12280 - order-2, due to the fallback, as <1/4 waste is not
                   possible
  12288 to 32768 - order-3, as <1/4 waste is again possible

After the patch:

  2056 to 32768  - order-3, because even in the range of 10928 to 12280
                   we now try to satisfy the calculated min_objects

As a result, the code is simpler and gives more consistent results.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Feng Tang <feng.tang@intel.com>
Reviewed-and-tested-by: Jay Patel <jaypatel@linux.ibm.com>
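To make the numbers above easy to reproduce, here is a minimal
user-space sketch of the order selection (an illustration, not the
kernel code): it hard-codes 4kB pages, slub_max_order=3 and
min_objects=8 (4 * (fls(nr_cpus) + 1) for nr_cpus=1), and it ignores
the slub_min_order/slub_min_objects boot parameters and the
MAX_OBJS_PER_PAGE clamp of the real implementation.

#include <stdio.h>

#define PAGE_SIZE	4096u
#define SLUB_MAX_ORDER	3u

/* smallest order such that PAGE_SIZE << order >= bytes */
static unsigned int get_order(unsigned int bytes)
{
	unsigned int order = 0;

	while ((PAGE_SIZE << order) < bytes)
		order++;
	return order;
}

/*
 * Smallest order that fits min_objects and wastes at most a 1/fract
 * part of the slab; a result above max_order means "not found".
 */
static unsigned int calc_slab_order(unsigned int size,
		unsigned int min_objects, unsigned int max_order,
		unsigned int fract)
{
	unsigned int order;

	for (order = get_order(min_objects * size);
	     order <= max_order; order++) {
		unsigned int slab_size = PAGE_SIZE << order;

		if (slab_size % size <= slab_size / fract)
			break;
	}
	return order;
}

static unsigned int calculate_order(unsigned int size)
{
	/* 4 * (fls(nr_cpus) + 1) == 8 for nr_cpus == 1 */
	unsigned int min_objects = 8;
	unsigned int max_objects = (PAGE_SIZE << SLUB_MAX_ORDER) / size;
	unsigned int order;

	if (min_objects > max_objects)
		min_objects = max_objects;

	/* the patched loop: accept 1/16, 1/8, 1/4 and finally 1/2 waste */
	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
		order = calc_slab_order(size, min_objects,
					SLUB_MAX_ORDER, fraction);
		if (order <= SLUB_MAX_ORDER)
			return order;
	}
	return get_order(size);	/* nothing fits under SLUB_MAX_ORDER */
}

int main(void)
{
	unsigned int sizes[] = { 2056, 10920, 10928, 12280, 12288, 32768 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("size %5u -> order-%u\n", sizes[i],
		       calculate_order(sizes[i]));
	return 0;	/* prints order-3 for every size above */
}

With the loop as patched (fraction > 1) this prints order-3 for all six
sizes; reverting the condition to fraction >= 4 and re-adding the
removed single-object fallback (calc_slab_order(size, 1, SLUB_MAX_ORDER,
1)) reproduces the pre-patch column, printing order-2 for 10928 and
12280.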
Diffstat (limited to 'mm/slub.c')
-rw-r--r--	mm/slub.c	14
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index c4b5f48149e8..86141e5164ca 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4171,9 +4171,11 @@ static inline int calculate_order(unsigned int size)
 	 * the order can only result in same or less fractional waste, not more.
 	 *
 	 * If that fails, we increase the acceptable fraction of waste and try
-	 * again.
+	 * again. The last iteration with fraction of 1/2 would effectively
+	 * accept any waste and give us the order determined by min_objects, as
+	 * long as at least single object fits within slub_max_order.
 	 */
-	for (unsigned int fraction = 16; fraction >= 4; fraction /= 2) {
+	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
 		order = calc_slab_order(size, min_objects, slub_max_order,
 					fraction);
 		if (order <= slub_max_order)
@@ -4181,14 +4183,6 @@ static inline int calculate_order(unsigned int size)
 	}
 
 	/*
-	 * We were unable to place multiple objects in a slab. Now
-	 * lets see if we can place a single object there.
-	 */
-	order = calc_slab_order(size, 1, slub_max_order, 1);
-	if (order <= slub_max_order)
-		return order;
-
-	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
 	order = get_order(size);
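Why can the waste never exceed 1/2 once a single object fits, as the
new comment in the first hunk claims? If the object is at most half
the slab, the leftover slab_size % size is smaller than the object and
thus smaller than half the slab; if it is larger than half the slab,
exactly one object fits and less than half the slab remains. A tiny
user-space brute-force check of that invariant (an illustration, not
part of the patch), assuming 4kB pages and orders 0 through 3:

#include <assert.h>

int main(void)
{
	/* slab sizes for orders 0..3 with 4kB pages */
	for (unsigned int slab = 4096; slab <= 32768; slab *= 2)
		/* any object size that fits at least once */
		for (unsigned int size = 1; size <= slab; size++)
			assert(slab % size <= slab / 2);
	return 0;
}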