author		Sergey Senozhatsky <sergey.senozhatsky@gmail.com>	2015-09-08 15:04:52 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2015-09-08 15:35:28 -0700
commit		b3e237f1f5a86030c875e186ff19640f4f4f3c63 (patch)
tree		82747ec2fcce63b70ab3a2decfad2c8126b6108a /mm
parent		6cbf16b3b66a61b9c6df8f2ed4ac346cb427f28a (diff)
download	linux-stable-b3e237f1f5a86030c875e186ff19640f4f4f3c63.tar.gz
		linux-stable-b3e237f1f5a86030c875e186ff19640f4f4f3c63.tar.bz2
		linux-stable-b3e237f1f5a86030c875e186ff19640f4f4f3c63.zip
zsmalloc: do not take class lock in zs_shrinker_count()
We can avoid taking class ->lock around zs_can_compact() in
zs_shrinker_count(), because the number we return is outdated in the
general case, by design.  Several sources can change a class's state
right after we return from zs_can_compact() -- ongoing I/O operations,
manually triggered compaction, or both happening simultaneously.

We redo this calculation during compaction on a per-class basis anyway.

zs_unregister_shrinker() will not return while the shrinker is active,
so classes won't unexpectedly disappear while zs_shrinker_count()
iterates them.

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
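For context, a minimal sketch of what the zs_shrinker_count() loop looks
like after this patch, reconstructed from the hunks below; declarations
and helpers outside the diff are assumptions about the surrounding
mm/zsmalloc.c of that era and may differ slightly:

static unsigned long zs_shrinker_count(struct shrinker *shrinker,
		struct shrink_control *sc)
{
	int i;
	struct size_class *class;
	unsigned long pages_to_free = 0;
	/* Assumed: the pool is recovered from the shrinker embedded in it. */
	struct zs_pool *pool = container_of(shrinker, struct zs_pool,
			shrinker);

	for (i = zs_size_classes - 1; i >= 0; i--) {
		class = pool->size_class[i];
		if (!class)
			continue;
		if (class->index != i)
			continue;

		/*
		 * No spin_lock(&class->lock) here any more: the value is
		 * advisory and may be stale the moment it is returned.
		 */
		pages_to_free += zs_can_compact(class);
	}

	return pages_to_free;
}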
Diffstat (limited to 'mm')
-rw-r--r--	mm/zsmalloc.c	4
1 file changed, 0 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index ce08d043becd..c19b99c8a457 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1710,8 +1710,6 @@ static struct page *isolate_source_page(struct size_class *class)
  *
  * Based on the number of unused allocated objects calculate
  * and return the number of pages that we can free.
- *
- * Should be called under class->lock.
  */
 static unsigned long zs_can_compact(struct size_class *class)
 {
@@ -1834,9 +1832,7 @@ static unsigned long zs_shrinker_count(struct shrinker *shrinker,
 		if (class->index != i)
 			continue;
 
-		spin_lock(&class->lock);
 		pages_to_free += zs_can_compact(class);
-		spin_unlock(&class->lock);
 	}
 
 	return pages_to_free;
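For reference, the estimate that zs_shrinker_count() now reads without
the class lock comes from zs_can_compact(); a sketch of how it computed
the value around the time of this commit (reconstructed, not part of
this diff -- the stat helpers and their names are assumptions about the
contemporary mm/zsmalloc.c):

static unsigned long zs_can_compact(struct size_class *class)
{
	unsigned long obj_wasted;

	/* Unused-but-allocated objects in this class (racy read by design). */
	obj_wasted = zs_stat_get(class, OBJ_ALLOCATED) -
		zs_stat_get(class, OBJ_USED);

	/* Convert wasted objects into whole zspages, then into pages. */
	obj_wasted /= get_maxobj_per_zspage(class->size,
			class->pages_per_zspage);

	return obj_wasted * class->pages_per_zspage;
}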