author | Will Deacon <will@kernel.org> | 2021-01-27 23:53:42 +0000
---|---|---
committer | Peter Zijlstra <peterz@infradead.org> | 2021-01-29 20:02:28 +0100
commit | 912efa17e5121693dfbadae29768f4144a3f9e62 (patch) |
tree | 8f2befff7da747e3c0c8cb135ae4129670e9c625 /arch/ia64 |
parent | 6ee1d745b7c9fd573fba142a2efdad76a9f1cb04 (diff) |
mm: proc: Invalidate TLB after clearing soft-dirty page state
Since commit 0758cd830494 ("asm-generic/tlb: avoid potential double
flush"), TLB invalidation is elided in tlb_finish_mmu() if no entries
were batched via the tlb_remove_*() functions. Consequently, the
page-table modifications performed by clear_refs_write() in response to
a write to /proc/<pid>/clear_refs are not followed by TLB invalidation.
Although this is fine when simply aging the ptes, in the case of
clearing the "soft-dirty" state we can end up with entries where
pte_write() is false, yet a writable mapping remains in the TLB.
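
To make the failure mode concrete, here is a condensed sketch
(paraphrased, not the verbatim fs/proc/task_mmu.c source) of the
pre-fix flow:

```c
/*
 * Condensed sketch of the broken pre-fix flow. The page walk rewrites
 * ptes (clearing the write and soft-dirty bits) but never batches any
 * pages via tlb_remove_*(), so since 0758cd830494 the final
 * tlb_finish_mmu() elides the TLB flush entirely.
 */
struct mmu_gather tlb;

tlb_gather_mmu(&tlb, mm, 0, -1);
walk_page_range(mm, 0, mm->highest_vm_end,
		&clear_refs_walk_ops, &cp);	/* pte_write() now false */
tlb_finish_mmu(&tlb, 0, -1);	/* nothing batched: no invalidation */
```

A stale, writable TLB entry can therefore outlive the pte update, and a
subsequent store through it will not be reported as soft-dirty.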
Fix this by avoiding the mmu_gather API altogether: instead, manage
both the 'tlb_flush_pending' flag on the 'mm_struct' and explicit TLB
invalidation for the soft-dirty path, much like mprotect() already does.
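
For illustration, the soft-dirty path then ends up shaped roughly like
this (condensed from the patch; error handling and the per-VMA
VM_SOFTDIRTY fixups are omitted):

```c
/*
 * Condensed shape of the fixed soft-dirty path: no mmu_gather at all;
 * the pending-flush window brackets the page walk and one explicit
 * flush is issued at the end, mirroring the mprotect() pattern.
 */
if (type == CLEAR_REFS_SOFT_DIRTY) {
	inc_tlb_flush_pending(mm);	/* publish that a flush is pending */
	mmu_notifier_invalidate_range_start(&range);
}

walk_page_range(mm, 0, mm->highest_vm_end, &clear_refs_walk_ops, &cp);

if (type == CLEAR_REFS_SOFT_DIRTY) {
	mmu_notifier_invalidate_range_end(&range);
	flush_tlb_mm(mm);		/* explicit TLB invalidation */
	dec_tlb_flush_pending(mm);
}
```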
Fixes: 0758cd830494 ("asm-generic/tlb: avoid potential double flush")
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yu Zhao <yuzhao@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20210127235347.1402-2-will@kernel.org
Diffstat (limited to 'arch/ia64')
0 files changed, 0 insertions, 0 deletions