author		Bui Quang Minh <minhquangbui99@gmail.com>	2022-08-21 22:40:55 +0700
committer	Andrew Morton <akpm@linux-foundation.org>	2022-09-11 20:26:00 -0700
commit		32d772708009eb90f8eeed6ec8f76e06f07e41e9 (patch)
tree		d6e4c30d01b906e54264ead0ef5a2473a525edeb /mm/page_counter.c
parent		8bd3873d1bff1d7b761949deaccc3b527bffa1e3 (diff)
mm: skip retry when new limit is not below old one in page_counter_set_max
In page_counter_set_max(), we want to make sure the new limit is not below
the concurrently-changing counter value. Before the swap, we read the
counter and check that the requested limit is not below it. After the swap,
we read the counter again and retry in case it was incremented in the
meantime, as that may violate the requirement.

Even though page_counter_try_charge() can still observe the old limit, the
counter is guaranteed not to be above the old limit after the increment. So
when the new limit is not below the old one, the counter is guaranteed not
to be above the new limit either, and we can skip the retry in this case to
optimize a little bit.

Link: https://lkml.kernel.org/r/20220821154055.109635-1-minhquangbui99@gmail.com
Signed-off-by: Bui Quang Minh <minhquangbui99@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
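As a rough illustration of the read-swap-read loop described above, here is a
minimal user-space sketch using C11 atomics. It is not the kernel code:
struct demo_counter, demo_set_max() and the plain -1 error return are
simplified, hypothetical stand-ins, and the kernel's xchg() barrier semantics
are only approximated by atomic_exchange().

#include <stdatomic.h>

struct demo_counter {
	atomic_long usage;	/* pages currently charged */
	atomic_long max;	/* current limit */
};

static int demo_set_max(struct demo_counter *c, long nr_pages)
{
	for (;;) {
		long usage, old;

		/* Read the counter and refuse a limit below it. */
		usage = atomic_load(&c->usage);
		if (usage > nr_pages)
			return -1;	/* the kernel returns -EBUSY here */

		/* Swap in the new limit. */
		old = atomic_exchange(&c->max, nr_pages);

		/*
		 * Re-read the counter: a concurrent charge may have raced
		 * against the old limit.  With the patch, nr_pages >= old
		 * also ends the loop, because a charge admitted under the
		 * old, lower limit cannot push the counter above the new one.
		 */
		if (atomic_load(&c->usage) <= usage || nr_pages >= old)
			return 0;

		/* The new limit may already be exceeded: roll back, retry. */
		atomic_store(&c->max, old);
	}
}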
Diffstat (limited to 'mm/page_counter.c')
-rw-r--r--	mm/page_counter.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_counter.c b/mm/page_counter.c
index eb156ff5d603..8a0cc24b60dd 100644
--- a/mm/page_counter.c
+++ b/mm/page_counter.c
@@ -193,7 +193,7 @@ int page_counter_set_max(struct page_counter *counter, unsigned long nr_pages)
 
 		old = xchg(&counter->max, nr_pages);
 
-		if (page_counter_read(counter) <= usage)
+		if (page_counter_read(counter) <= usage || nr_pages >= old)
 			return 0;
 
 		counter->max = old;
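For context, the charging side that the message above relies on orders its
operations roughly as sketched below: the count is modified first, then
checked against the limit, and rolled back on failure. Again this is a
hypothetical simplification (demo_try_charge() reuses the demo_counter
stand-in from the sketch above), not the kernel's page_counter_try_charge().

#include <stdbool.h>

/*
 * Modify the count first, then check it against the limit.  A successful
 * charge therefore never leaves the counter above the limit it observed,
 * even if that limit was the old, lower one.
 */
static bool demo_try_charge(struct demo_counter *c, long nr_pages)
{
	long count = atomic_fetch_add(&c->usage, nr_pages) + nr_pages;

	if (count > atomic_load(&c->max)) {
		atomic_fetch_sub(&c->usage, nr_pages);	/* undo and fail */
		return false;
	}
	return true;
}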