author    | Abel Wu <wuyun.abel@bytedance.com> | 2023-10-19 20:00:26 +0800
committer | Paolo Abeni <pabeni@redhat.com>    | 2023-10-24 10:38:30 +0200
commit    | 66e6369e312d161708786123fb44ecd53ff32d82 (patch)
tree      | 798153fb8fd8fa1295eb84078d1223ef44b887f4 /net/core
parent    | 2e12072c67b5f65fc71a569985a1262531fbdc06 (diff)
sock: Ignore memcg pressure heuristics when raising allocated
Before sockets became aware of net-memcg memory pressure in
commit e1aab161e013 ("socket: initial cgroup code."), a socket whose
usage was below the average was allowed to raise its allocation even
when its protocol was under pressure. This provided fairness among
the sockets of the same protocol.

That commit changed the behavior: the heuristic now also takes effect
when only the memcg is under pressure, which makes no sense. So
revert to the original behavior.

After the revert, __sk_mem_raise_allocated() no longer considers the
memcg's pressure. Since memcgs are isolated from each other w.r.t.
memory accounting, consuming one memcg's budget does not affect the
others. So, except in the places where buffer sizes need to be tuned,
allow workloads to use the memory they are provisioned.
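To make the "below average" wording concrete: the heuristic grants a raise when a socket's own usage is smaller than its fair share of the protocol's hard limit. The standalone C model below only illustrates that arithmetic; the figures and function names are invented for this sketch, while the kernel compares per-protocol page counters inside __sk_mem_raise_allocated().

/* Standalone model of the "fair below-average" heuristic applied under
 * global memory pressure. All numbers and names are illustrative; the
 * kernel roughly checks: hard limit > nr_sockets * this socket's usage.
 */
#include <stdbool.h>
#include <stdio.h>

static bool below_average(unsigned long hard_limit_pages,
			  unsigned long nr_sockets,
			  unsigned long socket_usage_pages)
{
	/* Usage below limit/nr_sockets means this socket is under its
	 * average share, so the raise is granted.
	 */
	return hard_limit_pages > nr_sockets * socket_usage_pages;
}

int main(void)
{
	/* Hypothetical: a 4096-page hard limit shared by 64 sockets. */
	printf("%d\n", below_average(4096, 64, 32));  /* 1: 32 pages is below the 64-page average */
	printf("%d\n", below_average(4096, 64, 128)); /* 0: 128 pages is above the 64-page average */
	return 0;
}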
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Acked-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Paolo Abeni <pabeni@redhat.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20231019120026.42215-3-wuyun.abel@bytedance.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Diffstat (limited to 'net/core')
-rw-r--r-- | net/core/sock.c | 14
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/net/core/sock.c b/net/core/sock.c
index 9f969e3c2ddf..1d28e3e87970 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3035,7 +3035,13 @@ EXPORT_SYMBOL(sk_wait_data);
  *	@amt: pages to allocate
  *	@kind: allocation type
  *
- *	Similar to __sk_mem_schedule(), but does not update sk_forward_alloc
+ *	Similar to __sk_mem_schedule(), but does not update sk_forward_alloc.
+ *
+ *	Unlike the globally shared limits among the sockets under same protocol,
+ *	consuming the budget of a memcg won't have direct effect on other ones.
+ *	So be optimistic about memcg's tolerance, and leave the callers to decide
+ *	whether or not to raise allocated through sk_under_memory_pressure() or
+ *	its variants.
  */
 int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
@@ -3093,7 +3099,11 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	if (sk_has_memory_pressure(sk)) {
 		u64 alloc;
 
-		if (!sk_under_memory_pressure(sk))
+		/* The following 'average' heuristic is within the
+		 * scope of global accounting, so it only makes
+		 * sense for global memory pressure.
+		 */
+		if (!sk_under_global_memory_pressure(sk))
 			return 1;
 
 		/* Try to be fair among all the sockets under global
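The kernel-doc text added above leaves memcg pressure policy to the callers of __sk_mem_raise_allocated(): the charge path stays optimistic, and each caller decides via sk_under_memory_pressure() or one of its variants. The self-contained sketch below shows that caller-side split; every type and helper in it is a hypothetical stand-in for illustration, not a kernel API.

/* Hypothetical caller-side pattern: the charge path stays optimistic
 * about the memcg, and the caller applies its own pressure policy
 * (in the kernel, sk_under_memory_pressure() or a protocol variant)
 * before actually expanding a buffer. Everything here is a stand-in.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_sock {
	bool pressure;        /* models the socket's memory-pressure state */
	long allocated_pages; /* models the accounted allocation */
};

/* Stand-in for the charge path: optimistic, always succeeds here. */
static bool raise_allocated(struct fake_sock *sk, int amt)
{
	sk->allocated_pages += amt;
	return true;
}

/* Caller decides: charge first, then consult its own pressure check
 * when choosing whether the buffer may actually grow.
 */
static bool try_expand_buffer(struct fake_sock *sk, int amt)
{
	if (!raise_allocated(sk, amt))
		return false;
	return !sk->pressure;
}

int main(void)
{
	struct fake_sock sk = { .pressure = true };

	printf("%d\n", try_expand_buffer(&sk, 4)); /* 0: pressure vetoes the growth */
	sk.pressure = false;
	printf("%d\n", try_expand_buffer(&sk, 4)); /* 1: no pressure, growth allowed */
	return 0;
}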