path: root/arch/mips/include/asm/cacheflush.h
Commit message | Author | Age | Files | Lines
* mm: Introduce flush_cache_vmap_early() | Alexandre Ghiti | 2023-12-14 | 1 | -0/+2
The pcpu setup, when using the page allocator, establishes a new vmalloc mapping so early in the boot process that it cannot use flush_cache_vmap(), which may depend on structures not yet initialized (for example, riscv currently sends an IPI to flush the other CPUs' TLBs). But on some architectures flush_cache_vmap() must be called: on riscv, for instance, some uarchs can cache invalid TLB entries, so the newly established mapping needs to be flushed to avoid taking an exception.

Fix this by introducing a new function, flush_cache_vmap_early(), which is called right after setting the new page table entry and before accessing the new mapping. It is implemented as a local TLB flush on riscv and is a no-op on other architectures (same as today).

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Dennis Zhou <dennis@kernel.org>
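For orientation, the generic fallback can be pictured as a no-op that architectures override; a minimal sketch assuming the usual (start, end) cacheflush signature, not the exact patch:

    /*
     * Minimal sketch of the generic fallback described above: a no-op
     * unless the architecture overrides it (riscv overrides it with a
     * local TLB flush).  Not the exact asm-generic code.
     */
    #ifndef flush_cache_vmap_early
    static inline void flush_cache_vmap_early(unsigned long start,
                                              unsigned long end)
    {
    }
    #endif

The early pcpu setup can then call it unconditionally right after installing the new page table entries.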
* mm: rationalise flush_icache_pages() and flush_icache_page() | Matthew Wilcox (Oracle) | 2023-08-24 | 1 | -6/+0
Move the default (no-op) implementation of flush_icache_pages() from <asm-generic/cacheflush.h> to <linux/cacheflush.h>. Move the flush_icache_page() wrapper out of each architecture and into <linux/cacheflush.h>.

Link: https://lkml.kernel.org/r/20230802151406.3735276-32-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
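After the consolidation, the wrapper amounts to the single-page case of flush_icache_pages(); a sketch of the shared definition (assumed shape, not quoted from the patch):

    /* Sketch: flush_icache_page() as the nr == 1 case of flush_icache_pages(). */
    static inline void flush_icache_page(struct vm_area_struct *vma,
                                         struct page *page)
    {
            flush_icache_pages(vma, page, 1);
    }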
* mips: implement the new page table range API | Matthew Wilcox (Oracle) | 2023-08-24 | 1 | -11/+21
Rename _PFN_SHIFT to PFN_PTE_SHIFT. Convert a few places to call set_pte() instead of set_pte_at(). Add set_ptes(), update_mmu_cache_range(), flush_icache_pages() and flush_dcache_folio(). Change the PG_arch_1 (aka PG_dcache_dirty) flag from being per-page to per-folio.

Link: https://lkml.kernel.org/r/20230802151406.3735276-18-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
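As a reference point for what the range API does, here is a simplified, architecture-neutral sketch of set_ptes(); the real MIPS version also folds in its cache maintenance, and PFN_PTE_SHIFT is the shift renamed above:

    /*
     * Simplified sketch of set_ptes(), not the MIPS implementation: write
     * nr consecutive PTEs for consecutive pages, advancing the PFN by one
     * page each iteration.
     */
    static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
                                pte_t *ptep, pte_t pte, unsigned int nr)
    {
            for (;;) {
                    set_pte(ptep, pte);
                    if (--nr == 0)
                            break;
                    ptep++;
                    pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
            }
    }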
* MIPS: mm: Remove local_cache_flush_page | Thomas Bogendoerfer | 2023-04-05 | 1 | -1/+0
After ide.h is gone, there are no users of local_cache_flush_page() left.

Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
* Add linux/cacheflush.h | Matthew Wilcox (Oracle) | 2021-11-17 | 1 | -2/+0
Many architectures do not include asm-generic/cacheflush.h, so turn the includes on their head and add linux/cacheflush.h which includes asm/cacheflush.h.

Move the flush_dcache_folio() declaration from asm-generic/cacheflush.h to linux/cacheflush.h and change linux/highmem.h to include linux/cacheflush.h instead of asm/cacheflush.h so that all necessary places will see flush_dcache_folio().

More functions should have their default implementations moved in the future, but those are for follow-on patches. This fixes csky, sparc and sparc64 which were missed in the commit which added flush_dcache_folio().

Fixes: 08b0b0059bf1 ("mm: Add flush_dcache_folio()")
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
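A trimmed sketch of the resulting header shape (illustrative; the real header has more cases):

    /* Trimmed sketch of <linux/cacheflush.h>: pull in the arch header and
     * make flush_dcache_folio() visible to every includer. */
    #ifndef _LINUX_CACHEFLUSH_H
    #define _LINUX_CACHEFLUSH_H

    #include <asm/cacheflush.h>

    struct folio;

    #ifndef ARCH_IMPLEMENTS_FLUSH_DCACHE_FOLIO
    void flush_dcache_folio(struct folio *folio);
    #endif

    #endif /* _LINUX_CACHEFLUSH_H */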
* mm: Add flush_dcache_folio() | Matthew Wilcox (Oracle) | 2021-10-18 | 1 | -0/+2
This is a default implementation which calls flush_dcache_page() on each page in the folio. If architectures can do better, they should implement their own version of it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
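The default described above boils down to a per-page loop; a sketch:

    /* Sketch of the fallback: flush every page of the folio with the
     * existing per-page primitive.  Architectures with a cheaper way are
     * expected to provide their own version. */
    void flush_dcache_folio(struct folio *folio)
    {
            long i, nr = folio_nr_pages(folio);

            for (i = 0; i < nr; i++)
                    flush_dcache_page(folio_page(folio, i));
    }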
* mm: remove flush_kernel_dcache_page | Christoph Hellwig | 2021-09-03 | 1 | -7/+1
flush_kernel_dcache_page is a rather confusing interface that implements a subset of flush_dcache_page by not being able to properly handle page cache mapped pages.

The only callers left are in the exec code, as all other previous callers were incorrect since they could have dealt with page cache pages. Replace the calls to flush_kernel_dcache_page with calls to flush_dcache_page, which for all architectures either does exactly the same thing, or additionally contains one or more of the following:

1) an optimization to defer the cache flush for page cache pages not mapped into userspace
2) additional flushing for mapped page cache pages if cache aliases are possible

Link: https://lkml.kernel.org/r/20210712060928.4161649-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Cc: Alex Shi <alexs@kernel.org>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Cercueil <paul@crapouillou.net>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Yoshinori Sato <ysato@users.osdn.me>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* MIPS: Delete unused flush_cache_sigtramp() | Paul Burton | 2019-02-07 | 1 | -2/+0
Commit adcc81f148d7 ("MIPS: math-emu: Write-protect delay slot emulation pages") left flush_cache_sigtramp() unused. Delete the dead code.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: linux-mips@vger.kernel.org
* MIPS: c-r4k: Split user/kernel flush_icache_range() | James Hogan | 2016-10-04 | 1 | -0/+5
flush_icache_range() is used for both user addresses (i.e. cacheflush(2)) and kernel addresses (as the API documentation describes). This isn't really suitable, however, for Enhanced Virtual Addressing (EVA), where cache operations on usermode addresses must use a different instruction, and the protected cache ops assume user addresses, making flush_icache_range() ineffective on kernel addresses.

Split out a new __flush_icache_user_range() and __local_flush_icache_user_range() for users which actually want to flush usermode addresses (note that flush_icache_user_range() already exists on various architectures but with different arguments).

The implementation of flush_icache_range() will be changed in an upcoming commit to use unprotected normal cache ops so as to always work on the kernel mode address space.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14152/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
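After the split, code that may see either kind of address is expected to pick the matching variant; a hedged sketch (the helper name and the TASK_SIZE test are illustrative, not from the patch):

    /* Illustrative helper, not kernel code: use the user variant for
     * usermode addresses (e.g. the cacheflush(2) path) and the plain
     * kernel variant otherwise. */
    static void sync_icache_range(unsigned long start, unsigned long len)
    {
            if (start < TASK_SIZE)
                    __flush_icache_user_range(start, start + len);
            else
                    flush_icache_range(start, start + len);
    }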
* MIPS: Sync icache & dcache in set_pte_at | Paul Burton | 2016-05-13 | 1 | -6/+0
It's possible for pages to become visible prior to update_mmu_cache running if a thread within the same address space preempts the current thread or runs simultaneously on another CPU. That is, the following scenario is possible:

    CPU0                    CPU1
    write to page
    flush_dcache_page
    flush_icache_page
    set_pte_at
                            map page
    update_mmu_cache

If CPU1 maps the page in between CPU0's set_pte_at, which marks it valid & visible, and update_mmu_cache, where the dcache flush occurs, then CPU1's icache will fill from stale data (unless it fills from the dcache, in which case all is good, but most MIPS CPUs don't have this property).

Commit 4d46a67a3eb8 ("MIPS: Fix race condition in lazy cache flushing.") attempted to fix that by performing the dcache flush in flush_icache_page such that it occurs before the set_pte_at call makes the page visible. However it has the problem that not all code that writes to pages exposed to userland calls flush_icache_page. There are many callers of set_pte_at under mm/ and only 2 of them do call flush_icache_page. Thus the race window between a page becoming visible & being coherent between the icache & dcache remains open in some cases.

To illustrate some of the cases, a WARN was added to __update_cache with this patch applied that triggered in cases where a page about to be flushed from the dcache was not the last page provided to flush_icache_page. That is, backtraces were obtained for cases in which the race window is left open without this patch. The 2 standout examples follow.

When forking a process:

    [   15.271842] [<80417630>] __update_cache+0xcc/0x188
    [   15.277274] [<80530394>] copy_page_range+0x56c/0x6ac
    [   15.282861] [<8042936c>] copy_process.part.54+0xd40/0x17ac
    [   15.289028] [<80429f80>] do_fork+0xe4/0x420
    [   15.293747] [<80413808>] handle_sys+0x128/0x14c

When exec'ing an ELF binary:

    [   14.445964] [<80417630>] __update_cache+0xcc/0x188
    [   14.451369] [<80538d88>] move_page_tables+0x414/0x498
    [   14.457075] [<8055d848>] setup_arg_pages+0x220/0x318
    [   14.462685] [<805b0f38>] load_elf_binary+0x530/0x12a0
    [   14.468374] [<8055ec3c>] search_binary_handler+0xbc/0x214
    [   14.474444] [<8055f6c0>] do_execveat_common+0x43c/0x67c
    [   14.480324] [<8055f938>] do_execve+0x38/0x44
    [   14.485137] [<80413808>] handle_sys+0x128/0x14c

These code paths write into a page, call flush_dcache_page, then call set_pte_at without flush_icache_page in between. The end result is that the icache can become corrupted & userland processes may execute unexpected or invalid code, typically resulting in a reserved instruction exception, a trap or a segfault.

Fix this race condition fully by performing any cache maintenance required to keep the icache & dcache in sync in set_pte_at, before the page is made valid. This has the added bonus of ensuring the cache maintenance always happens in one location, rather than being duplicated in flush_icache_page & update_mmu_cache. It also matches the way other architectures solve the same problem (see arm, ia64 & powerpc).

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reported-by: Ionela Voinescu <ionela.voinescu@imgtec.com>
Cc: Lars Persson <lars.persson@axis.com>
Fixes: 4d46a67a3eb8 ("MIPS: Fix race condition in lazy cache flushing.")
Cc: Steven J. Hill <sjhill@realitydiluted.com>
Cc: David Daney <david.daney@cavium.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable <stable@vger.kernel.org> # v4.1+
Patchwork: https://patchwork.linux-mips.org/patch/12722/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
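The ordering enforced by the fix can be sketched roughly as follows; this is a simplification of the idea, not the actual MIPS set_pte_at(), which handles more cases:

    /* Simplified sketch of the fix's ordering: do the dcache writeback /
     * icache invalidation for the new mapping *before* writing the PTE,
     * so no other thread can map and execute stale data in between. */
    static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
                                  pte_t *ptep, pte_t pteval)
    {
            if (pte_present(pteval))
                    __update_cache(addr, pteval);   /* sync caches first */
            set_pte(ptep, pteval);                  /* then make it visible */
    }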
* MIPS: Flush dcache for flush_kernel_dcache_page | Paul Burton | 2016-05-13 | 1 | -0/+1
The flush_kernel_dcache_page function was previously essentially a nop. This is incorrect for MIPS, where, if a page has been modified and either it aliases or it's executable and the icache doesn't fill from the dcache, the content needs to be written back from the dcache to the next level of the cache hierarchy (which is shared with the icache).

Implement this by simply calling flush_dcache_page, treating this kmapped cache flush function (flush_kernel_dcache_page) exactly the same as its non-kmapped counterpart (flush_dcache_page).

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Lars Persson <lars.persson@axis.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12719/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
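The implementation this describes is a one-line delegation; a sketch:

    /* Sketch: treat the kmapped flush exactly like the non-kmapped one. */
    static inline void flush_kernel_dcache_page(struct page *page)
    {
            flush_dcache_page(page);
    }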
* MIPS: Fix race condition in lazy cache flushing. | Lars Persson | 2015-03-25 | 1 | -15/+23
The lazy cache flushing implemented in the MIPS kernel suffers from a race condition that is exposed by do_set_pte() in mm/memory.c.

A pre-condition is a file-system that writes to the page from the CPU in its readpage method and then calls flush_dcache_page(). One example is ubifs. Another pre-condition is that the dcache flush is postponed in __flush_dcache_page().

Upon a page fault for an executable mapping not existing in the page-cache, the following will happen:

1. Write to the page
2. flush_dcache_page
3. flush_icache_page
4. set_pte_at
5. update_mmu_cache (commits the flush of a dcache-dirty page)

Between steps 4 and 5 another thread can hit the same page and it will encounter a valid pte. Because the data is still in the L1 dcache the CPU will fetch stale data from L2 into the icache and execute garbage.

This fix moves the commit of the cache flush to step 3 to close the race window. It also reduces the amount of flushes on non-executable mappings because we never enter __flush_dcache_page() for non-aliasing CPUs.

Regressions can occur in drivers that mistakenly rely on the flush_dcache_page() in get_user_pages() for DMA operations.

[ralf@linux-mips.org: Folded in patch 9346 to fix highmem issue.]

Signed-off-by: Lars Persson <larper@axis.com>
Cc: linux-mips@linux-mips.org
Cc: paul.burton@imgtec.com
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/9346/
Patchwork: https://patchwork.linux-mips.org/patch/9738/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
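For context, the lazy-flush pattern being discussed defers the dcache writeback for pages that are not yet mapped into userspace; a much-simplified sketch of the deferral and of where this patch commits it (step 3 above), not the exact MIPS code:

    /* Deferral: if the page isn't mapped into userspace yet, just remember
     * that its dcache lines are dirty instead of flushing immediately. */
    void flush_dcache_page(struct page *page)
    {
            if (page_mapping(page) && !page_mapped(page)) {
                    SetPageDcacheDirty(page);       /* postpone the flush */
                    return;
            }
            __flush_dcache_page(page);              /* aliases possible: flush now */
    }

    /* Commit point after this patch: flush any deferred dirty data in
     * flush_icache_page(), i.e. before set_pte_at() makes the mapping
     * visible, closing the window between steps 4 and 5. */
    void flush_icache_page(struct vm_area_struct *vma, struct page *page)
    {
            if ((vma->vm_flags & VM_EXEC) && PageDcacheDirty(page))
                    __flush_dcache_page(page);
    }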
* MIPS: add kmap_noncoherent to wire a cached non-coherent TLB entry | Paul Burton | 2014-05-28 | 1 | -0/+6
This is identical to kmap_coherent apart from the cache coherency attribute used for the TLB entry, so kmap_coherent is abstracted to kmap_prot which is then called for both kmap_coherent & kmap_noncoherent. This will be used by a subsequent patch.

Suggested-by: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
* MIPS: cache: Provide cache flush operations for XFS | Ralf Baechle | 2011-10-20 | 1 | -0/+24
Until now flush_kernel_vmap_range() and invalidate_kernel_vmap_range() did not exist on MIPS, resulting in heavy cache corruption on XFS filesystems.

Left for post-3.0: optimization, and making this work with highmem too. Since the combination of highmem + cache aliases doesn't work at the moment, this isn't a regression.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/2505/
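The intended usage pattern for the two helpers, sketched under assumptions (the surrounding function is hypothetical, not XFS code; the flush/invalidate calls are the new API):

    /* Hypothetical caller: write to page cache pages through a private
     * vmap() alias, then flush that alias so the data is visible through
     * the pages' other mappings (or to DMA). */
    static void write_through_vmap_alias(struct page **pages, unsigned int nr,
                                         const void *data, size_t len)
    {
            void *vaddr = vmap(pages, nr, VM_MAP, PAGE_KERNEL);

            if (!vaddr)
                    return;

            memcpy(vaddr, data, len);
            flush_kernel_vmap_range(vaddr, len);    /* push dirty lines out of the alias */

            /* The converse, before reading through the alias data written
             * via another mapping: invalidate_kernel_vmap_range(vaddr, len); */

            vunmap(vaddr);
    }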
* block: add helpers to run flush_dcache_page() against a bio and a request's pages | Ilya Loginov | 2009-11-26 | 1 | -0/+1
The mtdblock driver doesn't call flush_dcache_page for pages in a request. This causes problems on architectures where the icache doesn't fill from the dcache or with dcache aliases. The patch fixes this.

The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid pointless empty cache-thrashing loops on architectures for which flush_dcache_page() is a no-op: the helpers flush the pages on architectures where ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE equals 1 and do nothing otherwise.

See the "fix mtd_blkdevs problem with caches on some architectures" discussion on LKML for more information.

Signed-off-by: Ilya Loginov <isloginov@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Peter Horton <phorton@bitbox.co.uk>
Cc: "Ed L. Cashin" <ecashin@coraid.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
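The guard works as sketched below (simplified, with a hypothetical helper name rather than the actual block-layer one):

    /* Simplified sketch of the guard: only walk the pages when the
     * architecture actually implements flush_dcache_page(); otherwise the
     * helper compiles to nothing, avoiding a pointless loop. */
    #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
    static void flush_bio_pages(struct page **pages, unsigned int nr)
    {
            unsigned int i;

            for (i = 0; i < nr; i++)
                    flush_dcache_page(pages[i]);
    }
    #else
    static inline void flush_bio_pages(struct page **pages, unsigned int nr)
    {
    }
    #endif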
* MIPS: Move headfiles to new location below arch/mips/include | Ralf Baechle | 2008-10-11 | 1 | -0/+116
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>