| author    | Andrew Bresticker <abrestic@google.com>        | 2011-11-02 13:40:29 -0700 |
|-----------|------------------------------------------------|---------------------------|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2011-11-02 16:07:03 -0700 |
| commit    | c1e2ee2dc436574880758b3836fc96935b774c32       |                           |
| tree      | aa496a9ba20e06749194faa4dbb14b6046e6b06b       |                           |
| parent    | 080d676de095a14ecba14c0b9a91acb5bbb634df       |                           |
memcg: replace ss->id_lock with a rwlock
While back-porting Johannes Weiner's patch "mm: memcg-aware global
reclaim" for an internal effort, we noticed a significant performance
regression during page-reclaim-heavy workloads due to high contention on
the ss->id_lock. This lock protects the idr map and serializes calls to
idr_get_next() in css_get_next() (which is used during the memcg hierarchy
walk).
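For context, the contended path has roughly this shape (a simplified sketch of
the lookup step in css_get_next() before the patch; the local variable names are
illustrative, not the exact kernel source):

    /* Every step of the hierarchy walk takes the same exclusive lock,
     * so concurrent reclaimers serialize on ss->id_lock even though
     * idr_get_next() only reads the idr map. */
    spin_lock(&ss->id_lock);
    tmp = idr_get_next(&ss->idr, &tmpid);
    spin_unlock(&ss->id_lock);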
Since idr_get_next() only does a lookup, it needs to be serialized only with
respect to idr_remove()/idr_get_new(). Making ss->id_lock an rwlock greatly
reduces contention and improves performance.
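Sketched below is the shape of the change (simplified; the exact call sites in
kernel/cgroup.c may differ): the lock becomes an rwlock_t, the lookup side takes
it for reading, and only the paths that modify the idr map take it for writing.

    /* in struct cgroup_subsys: spinlock_t id_lock becomes */
    rwlock_t id_lock;

    /* lookup side (css_get_next()): readers may run concurrently */
    read_lock(&ss->id_lock);
    tmp = idr_get_next(&ss->idr, &tmpid);
    read_unlock(&ss->id_lock);

    /* update side (css id allocation/removal): still exclusive */
    write_lock(&ss->id_lock);
    idr_remove(&ss->idr, id);
    write_unlock(&ss->id_lock);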
Tested: cat a 256m file from a ramdisk in a 128m container 50 times on
each core (one file + container per core) in parallel on a NUMA machine.
The result is the time for the test to complete in one of the containers.
Both kernels included Johannes' memcg-aware global reclaim patches.
Before rwlock patch: 1710.778s
After rwlock patch: 152.227s
Signed-off-by: Andrew Bresticker <abrestic@google.com>
Cc: Paul Menage <menage@gmail.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Ying Han <yinghan@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>