commit    ef5cbcb6bfc8cfc7bba58c74c0765c471ef86277
author    Nicholas Piggin <npiggin@gmail.com>    2018-11-01 17:42:16 +0800
committer Ley Foon Tan <ley.foon.tan@intel.com>  2019-03-07 05:29:35 +0800
tree      1dd8e2a0397f872d2d54bd7a3a8c1da948997207
parent    d5dbb2e8ce6e19a56d14ed24a8e10c3fed5375b4
nios2: update_mmu_cache clear the old entry from the TLB
Fault paths like do_read_fault will install a Linux pte with the young
bit clear. The CPU then faults again because the TLB has not been
updated; this time a valid pte exists, so handle_pte_fault just sets the
young bit with ptep_set_access_flags, which flushes the TLB. With the
TLB flushed, the next attempt goes to the fast TLB handler, which loads
the TLB with the new Linux pte, and the access proceeds.

This design is fragile in that it depends on the young bit being clear
after the initial Linux fault. A proposed core mm change that
immediately sets the young bit on such a fault results in
ptep_set_access_flags not flushing the TLB, because it finds no change
to the pte. The spurious-fault fix path only flushes the TLB if the
access was a store; if it was a load, the result is an infinite loop of
page faults.

This change adds a TLB flush in update_mmu_cache, which removes the
stale TLB entry on the first fault. The fast TLB handler then loads the
new pte and the Linux page fault is avoided entirely.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Ley Foon Tan <ley.foon.tan@intel.com>
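To make the failure mode concrete, here is a condensed sketch,
paraphrased from the generic mm code of this era (ptep_set_access_flags
in mm/pgtable-generic.c and the tail of handle_pte_fault in
mm/memory.c). It is not the verbatim kernel source: locking is omitted,
and the helper sketch_pte_fault_tail is an invented name for
illustration only.

/* Paraphrased from mm/pgtable-generic.c: the flush only happens
 * when the pte actually changed. */
int ptep_set_access_flags(struct vm_area_struct *vma,
			  unsigned long address, pte_t *ptep,
			  pte_t entry, int dirty)
{
	int changed = !pte_same(*ptep, entry);

	if (changed) {
		set_pte_at(vma->vm_mm, address, ptep, entry);
		flush_tlb_fix_spurious_fault(vma, address);
	}
	return changed;	/* unchanged pte -> no TLB flush */
}

/* Hypothetical helper name; condenses the end of handle_pte_fault. */
static void sketch_pte_fault_tail(struct vm_area_struct *vma,
				  unsigned long address, pte_t *ptep,
				  pte_t entry, int write)
{
	if (ptep_set_access_flags(vma, address, ptep, entry, write)) {
		/* pte changed: the stale TLB entry was flushed above */
		update_mmu_cache(vma, address, ptep);
	} else if (write) {
		/* spurious write fault: flush so the store can retry */
		flush_tlb_fix_spurious_fault(vma, address);
	}
	/* Unchanged pte on a load: nothing flushes the stale nios2
	 * TLB entry, so the same fault repeats forever -- the loop
	 * this patch breaks. */
}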
Diffstat (limited to 'arch/nios2')
-rw-r--r--  arch/nios2/mm/cacheflush.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
index 506f6e1c86d5..d58e7e80dc0d 100644
--- a/arch/nios2/mm/cacheflush.c
+++ b/arch/nios2/mm/cacheflush.c
@@ -204,6 +204,8 @@ void update_mmu_cache(struct vm_area_struct *vma,
 	struct page *page;
 	struct address_space *mapping;
 
+	flush_tlb_page(vma, address);
+
 	if (!pfn_valid(pfn))
 		return;
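For context, a minimal sketch of update_mmu_cache with this hunk
applied. Only the signature fragment and the lines visible in the hunk
above are taken from the patch; the parameter name pte, the pfn
derivation, and the elided tail of the function are assumptions, marked
as such in the comments.

void update_mmu_cache(struct vm_area_struct *vma,
		      unsigned long address, pte_t *pte)
{
	unsigned long pfn = pte_pfn(*pte);	/* assumed pfn source */
	struct page *page;
	struct address_space *mapping;

	/* New with this patch: evict any stale TLB entry so the next
	 * access misses in the TLB and the fast handler loads the
	 * current Linux pte, skipping the Linux fault path entirely. */
	flush_tlb_page(vma, address);

	if (!pfn_valid(pfn))
		return;

	/* Remainder not shown in the hunk; assumed to perform cache
	 * maintenance for valid pages, using page and mapping. */
}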