author | Will Deacon <will@kernel.org> | 2021-01-27 23:53:43 +0000
---|---|---
committer | Peter Zijlstra <peterz@infradead.org> | 2021-01-29 20:02:28 +0100
commit | ae8eba8b5d723a4ca543024b6e51f4d0f4fb6b6b (patch) |
tree | 2161b6f2e88b10c777d793dff2e7359c2b89c370 /mm/mmu_gather.c |
parent | 912efa17e5121693dfbadae29768f4144a3f9e62 (diff) |
tlb: mmu_gather: Remove unused start/end arguments from tlb_finish_mmu()
Since commit 7a30df49f63a ("mm: mmu_gather: remove __tlb_reset_range()
for force flush"), the 'start' and 'end' arguments to tlb_finish_mmu()
are no longer used, because we flush the whole mm in the case of a
nested invalidation.
Remove the unused arguments and update all callers.
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Yu Zhao <yuzhao@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20210127235347.1402-3-will@kernel.org
Diffstat (limited to 'mm/mmu_gather.c')
-rw-r--r-- | mm/mmu_gather.c | 5
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 03c33c93a582..b0be5a7aa08f 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -290,14 +290,11 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 /**
  * tlb_finish_mmu - finish an mmu_gather structure
  * @tlb: the mmu_gather structure to finish
- * @start: start of the region that will be removed from the page-table
- * @end: end of the region that will be removed from the page-table
  *
  * Called at the end of the shootdown operation to free up any resources that
  * were required.
  */
-void tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end)
+void tlb_finish_mmu(struct mmu_gather *tlb)
 {
 	/*
 	 * If there are parallel threads are doing PTE changes on same range
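For illustration, a minimal sketch of the caller-side change the commit message refers to ("update all callers"). The function name and the teardown step in the middle are made-up placeholders, not call sites touched by this patch; only the tlb_gather_mmu()/tlb_finish_mmu() pairing reflects the interface at this point in the series (tlb_gather_mmu() still takes the range here).

```c
#include <linux/mm_types.h>
#include <asm/tlb.h>

/*
 * Illustrative only: the shape of a typical mmu_gather user after this
 * patch.  example_zap_range() is a hypothetical name, not code from the
 * commit.
 */
static void example_zap_range(struct mm_struct *mm, unsigned long start,
			      unsigned long end)
{
	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, start, end);	/* unchanged by this patch */

	/* ... page-table teardown of [start, end) goes here ... */

	/* Before this commit: tlb_finish_mmu(&tlb, start, end); */
	tlb_finish_mmu(&tlb);			/* after: range arguments dropped */
}
```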