path: root/Documentation/cgroups
author	Frederic Weisbecker <fweisbec@gmail.com>	2012-05-29 15:07:03 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2012-05-29 16:22:27 -0700
commit	2bb2ba9d51a8044a71a29608d2c4ef8f5b2d57a2 (patch)
tree	5e58869c606c541d41a9bfa62aa6e8bc42cae5ac /Documentation/cgroups
parent	f9be23d6da035241b7687b25e64401171986dcef (diff)
rescounters: add res_counter_uncharge_until()
When killing a res_counter which is a child of another counter, we need to do

	res_counter_uncharge(child, xxx)
	res_counter_charge(parent, xxx)

This is not atomic and wastes CPU.  This patch adds res_counter_uncharge_until().  This function's uncharge propagates to ancestors until the specified res_counter:

	res_counter_uncharge_until(child, parent, xxx)

Now the operation is atomic and efficient.

Signed-off-by: Frederic Weisbecker <fweisbec@redhat.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Ying Han <yinghan@google.com>
Cc: Glauber Costa <glommer@parallels.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
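For illustration only (this sketch is not part of the commit or of the documented file), the fragment below shows how a caller might replace the old two-call pattern with the new helper. It assumes the res_counter API declared in <linux/res_counter.h> in this kernel series; the wrapper name move_charge_to_parent() is hypothetical.

/*
 * Illustrative sketch only -- not part of this commit.  Assumes the
 * res_counter API declared in <linux/res_counter.h> in this kernel
 * series; move_charge_to_parent() is a made-up helper name.
 */
#include <linux/res_counter.h>

static void move_charge_to_parent(struct res_counter *child,
				  struct res_counter *parent,
				  unsigned long val)
{
	/*
	 * Old pattern: two separate operations, not atomic and
	 * touching the parent counter twice:
	 *
	 *	res_counter_uncharge(child, val);
	 *	res_counter_charge(parent, val, ...);
	 */

	/*
	 * New helper: uncharge 'child' and propagate the uncharge up
	 * the hierarchy, stopping when the walk reaches 'parent', so
	 * the charge effectively stays accounted to the parent.
	 */
	res_counter_uncharge_until(child, parent, val);
}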
Diffstat (limited to 'Documentation/cgroups')
-rw-r--r--	Documentation/cgroups/resource_counter.txt	8
1 file changed, 8 insertions, 0 deletions
diff --git a/Documentation/cgroups/resource_counter.txt b/Documentation/cgroups/resource_counter.txt
index f3c4ec3626a2..0c4a344e78fa 100644
--- a/Documentation/cgroups/resource_counter.txt
+++ b/Documentation/cgroups/resource_counter.txt
@@ -92,6 +92,14 @@ to work with it.
The _locked routines imply that the res_counter->lock is taken.
+ f. void res_counter_uncharge_until
+ (struct res_counter *rc, struct res_counter *top,
+ unsigned long val)
+
+ Almost the same as res_counter_uncharge() but propagation of the uncharge
+ stops when rc == top. This is useful when killing a res_counter in a
+ child cgroup.
+
2.1 Other accounting routines
There are more routines that may help you with common needs, like