From b035f5a6d8521bc77448d1c61db6244d91da3325 Mon Sep 17 00:00:00 2001
From: Catalin Marinas <catalin.marinas@arm.com>
Date: Mon, 12 Jun 2023 16:32:00 +0100
Subject: mm: slab: reduce the kmalloc() minimum alignment if DMA bouncing possible

If an architecture opted in to DMA bouncing of unaligned kmalloc()
buffers (ARCH_WANT_KMALLOC_DMA_BOUNCE), reduce the minimum kmalloc()
cache alignment below cache-line size to ARCH_KMALLOC_MINALIGN.

Link: https://lkml.kernel.org/r/20230612153201.554742-17-catalin.marinas@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Vlastimil Babka
Tested-by: Isaac J. Manjarres
Cc: Christoph Hellwig
Cc: Robin Murphy
Cc: Alasdair Kergon
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: Daniel Vetter
Cc: Greg Kroah-Hartman
Cc: Herbert Xu
Cc: Jerry Snitselaar
Cc: Joerg Roedel
Cc: Jonathan Cameron
Cc: Jonathan Cameron
Cc: Lars-Peter Clausen
Cc: Logan Gunthorpe
Cc: Marc Zyngier
Cc: Mark Brown
Cc: Mike Snitzer
Cc: "Rafael J. Wysocki"
Cc: Saravana Kannan
Cc: Will Deacon
Signed-off-by: Andrew Morton
---
 mm/slab_common.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 7c6475847fdf..43c008165f56 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -18,6 +18,7 @@
 #include <linux/uaccess.h>
 #include <linux/seq_file.h>
 #include <linux/dma-mapping.h>
+#include <linux/swiotlb.h>
 #include <linux/proc_fs.h>
 #include <linux/debugfs.h>
 #include <linux/kasan.h>
@@ -865,6 +866,10 @@ void __init setup_kmalloc_cache_index_table(void)
 
 static unsigned int __kmalloc_minalign(void)
 {
+#ifdef CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC
+	if (io_tlb_default_mem.nslabs)
+		return ARCH_KMALLOC_MINALIGN;
+#endif
 	return dma_get_cache_alignment();
 }
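
For illustration only, a minimal userspace sketch of the decision the new
__kmalloc_minalign() hunk encodes follows; it is not kernel code. The
values 8 (ARCH_KMALLOC_MINALIGN) and 128 (the cache-line alignment that
dma_get_cache_alignment() would return) are assumed stand-ins,
sketch_kmalloc_minalign() is a hypothetical helper, and its io_tlb_nslabs
and dma_bounce_unaligned_kmalloc parameters stand in for
io_tlb_default_mem.nslabs and CONFIG_DMA_BOUNCE_UNALIGNED_KMALLOC.

/*
 * Sketch of the alignment decision made by the patch above, with
 * stand-in values instead of the real kernel APIs.
 */
#include <stdio.h>
#include <stdbool.h>

#define SKETCH_ARCH_KMALLOC_MINALIGN	8	/* assumed arch minimum */
#define SKETCH_CACHE_LINE_ALIGN		128	/* assumed dma_get_cache_alignment() */

static unsigned int sketch_kmalloc_minalign(unsigned long io_tlb_nslabs,
					    bool dma_bounce_unaligned_kmalloc)
{
	/*
	 * If the architecture can bounce unaligned kmalloc() buffers via
	 * SWIOTLB and a default SWIOTLB pool was actually allocated, small
	 * alignments are safe: unaligned DMA gets bounced instead of
	 * relying on cache-line-sized allocations.
	 */
	if (dma_bounce_unaligned_kmalloc && io_tlb_nslabs)
		return SKETCH_ARCH_KMALLOC_MINALIGN;

	/* Otherwise keep the cache-line-sized minimum alignment. */
	return SKETCH_CACHE_LINE_ALIGN;
}

int main(void)
{
	printf("bouncing enabled, SWIOTLB present -> minalign %u\n",
	       sketch_kmalloc_minalign(1024, true));
	printf("bouncing enabled, no SWIOTLB      -> minalign %u\n",
	       sketch_kmalloc_minalign(0, true));
	printf("bouncing not enabled              -> minalign %u\n",
	       sketch_kmalloc_minalign(1024, false));
	return 0;
}

The point of the patch is the first branch: when a default SWIOTLB pool
exists, unaligned DMA buffers can be bounced, so kmalloc() caches no
longer need to be padded out to the cache-line size.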