author    | Alexandre Ghiti <alexghiti@rivosinc.com> | 2023-12-12 22:34:56 +0100
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2024-02-16 19:14:25 +0100
commit    | 2b89c3f9d3d069924dc1bedd400cd6e93435980c (patch)
tree      | ed40d29c168de6054db7890b33f1a78239b2e4af /include
parent    | 686820fe141ea0220fc6fdfc7e5694f915cf64b2 (diff)
mm: Introduce flush_cache_vmap_early()
[ Upstream commit 7a92fc8b4d20680e4c20289a670d8fca2d1f2c1b ]
The pcpu setup, when using the page allocator, sets up a new vmalloc
mapping very early in the boot process, so early that it cannot use the
flush_cache_vmap() function, which may depend on structures not yet
initialized (for example, on riscv we currently send an IPI to flush
the other CPUs' TLBs).
But on some architectures, we must call flush_cache_vmap(): for example,
on riscv some uarchs can cache invalid TLB entries, so we need to flush
the newly established mapping to avoid taking an exception.
Fix this by introducing a new function, flush_cache_vmap_early(), which
is called right after setting the new page table entry and before
accessing this new mapping. This new function performs a local TLB
flush on riscv and is a no-op on other architectures (same as today).
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Dennis Zhou <dennis@kernel.org>
Stable-dep-of: d9807d60c145 ("riscv: mm: execute local TLB flush after populating vmemmap")
Signed-off-by: Sasha Levin <sashal@kernel.org>
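
To make the ordering described above concrete, here is a minimal sketch of an
early percpu-style mapping path. pcpu_map_page_early() and its ptep parameter
are hypothetical stand-ins for illustration, not the actual mm/percpu.c code;
set_pte_at(), mk_pte() and flush_cache_vmap_early() are the real kernel
interfaces involved.

#include <linux/mm.h>
#include <asm/cacheflush.h>

/*
 * Hypothetical helper sketching the "install PTE, flush, then touch"
 * ordering; not the actual mm/percpu.c code.
 */
static void __init pcpu_map_page_early(unsigned long addr, pte_t *ptep,
				       struct page *page)
{
	/* 1. Install the page table entry for the new vmalloc-space mapping. */
	set_pte_at(&init_mm, addr, ptep, mk_pte(page, PAGE_KERNEL));

	/*
	 * 2. Flush before anything dereferences the mapping. This early in
	 *    boot, flush_cache_vmap() may not be usable (on riscv it can
	 *    send IPIs to other CPUs), so the flush must stay local.
	 */
	flush_cache_vmap_early(addr, addr + PAGE_SIZE);

	/* 3. Only now is it safe to access the new mapping. */
}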
Diffstat (limited to 'include')
-rw-r--r-- include/asm-generic/cacheflush.h | 6 ++++++
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 84ec53ccc450..7ee8a179d103 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -91,6 +91,12 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 }
 #endif
 
+#ifndef flush_cache_vmap_early
+static inline void flush_cache_vmap_early(unsigned long start, unsigned long end)
+{
+}
+#endif
+
 #ifndef flush_cache_vunmap
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
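
For an architecture that actually needs the flush, the #ifndef guard lets its
own asm/cacheflush.h provide a definition before the generic header is pulled
in. Below is a hedged sketch of the riscv-style override;
local_flush_tlb_kernel_range() is the local, IPI-free flush helper added
elsewhere in this series, and the exact wiring may differ from the real
arch/riscv header.

/* arch/<arch>/include/asm/cacheflush.h (sketch, riscv-style) */
#ifndef _ASM_CACHEFLUSH_H
#define _ASM_CACHEFLUSH_H

/* Local (single-CPU) kernel TLB flush; no IPIs, so safe in early boot. */
void local_flush_tlb_kernel_range(unsigned long start, unsigned long end);

/*
 * Defining the macro here makes the #ifndef in the generic header skip
 * its empty fallback, so callers get a real local flush on this arch.
 */
#define flush_cache_vmap_early(start, end) \
	local_flush_tlb_kernel_range(start, end)

#include <asm-generic/cacheflush.h>

#endif /* _ASM_CACHEFLUSH_H */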