path: root/crypto
author	Linus Torvalds <torvalds@linux-foundation.org>	2022-12-06 11:15:09 -0800
committer	Andrew Morton <akpm@linux-foundation.org>	2022-12-11 18:12:21 -0800
commit	c47454823bd4e3ab34ed3f795afd4479ab938a3f (patch)
tree	0c00156416c907cbd1f568a6bbc85d13b21af253 /crypto
parent	c7cdf94e9cd7a03549e61b0f85949959191b8a10 (diff)
mm: mmu_gather: allow more than one batch of delayed rmaps
Commit 5df397dec7c4 ("mm: delay page_remove_rmap() until after the TLB has been flushed") limited the page batching for the mmu gather operation when a dirty shared page needed to delay rmap removal until after the TLB had been flushed.

It did so because it needs to walk that array of pages while still holding the page table lock, and our mmu_gather infrastructure allows for batching quite a lot of pages. We may have thousands of pages queued up for freeing, and we wanted to walk only the last batch if we then added a dirty page to the queue.

However, when I limited it to one batch, I didn't think of the degenerate case of the special first batch that is embedded on-stack in the mmu_gather structure (called "local") and that only has eight entries.

So with the right pattern, that "limit delayed rmap to just one batch" check will trigger over and over in that first small batch, and we'll waste a lot of time flushing TLBs every eight pages.

And those right patterns are trivially triggered by just having shared mappings with lots of adjacent dirty pages, like the 'page_fault3' subtest of the 'will-it-scale' benchmark, which just maps a shared area, dirties all pages, and unmaps it. Rinse and repeat.

We still want to limit the batching, but to fix this (easily triggered) degenerate case, just expand the "only one batch" logic to instead be "only one batch that isn't the special first on-stack ('local') batch". That way, when we need to flush the delayed rmaps, we can still limit our walk to just the last batch - and that first small one.

Link: https://lkml.kernel.org/r/CAHk-=whkL5aM1fR7kYUmhHQHBcMUc-bDoFP7EwYjTxy64DGtvw@mail.gmail.com
Fixes: 5df397dec7c4 ("mm: delay page_remove_rmap() until after the TLB has been flushed")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: kernel test robot <yujie.liu@intel.com>
Link: https://lore.kernel.org/oe-lkp/202212051534.852804af-yujie.liu@intel.com
Tested-by: Huang, Ying <ying.huang@intel.com>
Tested-by: Hugh Dickins <hughd@google.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nadav Amit <nadav.amit@gmail.com>
Cc: Xing Zhengjun <zhengjun.xing@linux.intel.com>
Cc: "Yin, Fengwei" <fengwei.yin@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
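The patch itself landed in mm/mmu_gather.c, so it is not visible in this view (the diffstat below is limited to 'crypto'). As a rough illustration of the "local batch plus one" rule described above, here is a small stand-alone C model. The names (batch, gather, delayed_rmap, next_batch) loosely echo the mmu_gather code but are simplified stand-ins; this is a sketch of the idea, not the literal kernel change:

    /*
     * Minimal user-space model of the batching rule described above.
     * Names loosely follow mm/mmu_gather.c; this is an illustrative
     * sketch under simplified assumptions, not the kernel patch itself.
     */
    #include <stdbool.h>
    #include <stdio.h>

    struct batch { int nr, max; };

    struct gather {
        struct batch *active;   /* batch currently being filled */
        struct batch local;     /* small embedded first batch (8 entries) */
        struct batch heap;      /* stands in for one allocated batch */
        bool delayed_rmap;      /* a dirty shared page is queued */
        int flushes;            /* count of forced TLB flushes */
    };

    /* Try to open a fresh batch; returning false forces a TLB flush. */
    static bool next_batch(struct gather *tlb)
    {
        /*
         * The fix: refuse a new batch only once we are already PAST
         * the special embedded "local" batch.  The old check refused
         * whenever delayed_rmap was set, so the 8-entry local batch
         * degenerated to a TLB flush every eight pages.
         */
        if (tlb->delayed_rmap && tlb->active != &tlb->local)
            return false;
        tlb->heap = (struct batch){ .nr = 0, .max = 512 };
        tlb->active = &tlb->heap;
        return true;
    }

    static void queue_page(struct gather *tlb, bool dirty_shared)
    {
        if (tlb->active->nr == tlb->active->max && !next_batch(tlb)) {
            tlb->flushes++;             /* flush TLB, walk delayed rmaps */
            tlb->delayed_rmap = false;
            tlb->local.nr = 0;
            tlb->active = &tlb->local;
        }
        tlb->active->nr++;
        tlb->delayed_rmap |= dirty_shared;
    }

    int main(void)
    {
        struct gather tlb = { .active = &tlb.local,
                              .local = { .nr = 0, .max = 8 } };
        /* Model page_fault3: lots of adjacent dirty shared pages. */
        for (int i = 0; i < 5000; i++)
            queue_page(&tlb, true);
        printf("forced flushes: %d\n", tlb.flushes);
        return 0;
    }

With the pre-fix check (refuse a new batch whenever delayed_rmap is set), this loop would flush every 8 pages; with the fixed check it flushes roughly every 520, since the walk at flush time only ever has to cover the last batch plus the small local one.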
Diffstat (limited to 'crypto')
0 files changed, 0 insertions, 0 deletions