commit 2030ca560c5f24138c1f0f8c8a89f1ac3b560613
tree   e2a65bd95be1d38a8255e174221714a6c3a1a051
parent 7f49fd5d7acd82163685559045d04d49433581cc
author NeilBrown <neilb@suse.de>  2019-09-20 16:36:45 +1000
committer J. Bruce Fields <bfields@redhat.com>  2019-09-20 12:31:51 -0400

nfsd: degrade slot-count more gracefully as allocation nears exhaustion.
The original code in nfsd4_get_drc_mem() would hand out 30
slots (approximately NFSD_MAX_MEM_PER_SESSION bytes at slightly
over 2K per slot) to each requesting client until it ran out
of space; it would then possibly give one last client a reduced
allocation, and fail every allocation after that.
Since commit de766e570413 ("nfsd: give out fewer session slots as
limit approaches") the last 90 slots are given out to about 12
clients with quickly reducing slot counts (better than just 3
clients). This still seems unnecessarily hasty.
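For a rough feel of that behaviour, here is a small standalone sketch
(ordinary userspace C, not the kernel code) of how the old total_avail/3
clamp parcels out the tail of the pool. The 2048-byte slot size and the
90-slot starting pool are assumptions taken from the figures above:

/* Back-of-the-envelope simulation of the post-de766e570413 behaviour:
 * each new client is limited to a third of the remaining DRC memory,
 * so the last ~90 slots end up spread over roughly a dozen clients
 * with quickly shrinking slot counts.
 */
#include <stdio.h>

int main(void)
{
        unsigned long slotsize = 2048;              /* assumed: ~2K per slot */
        unsigned long total_avail = 90 * slotsize;  /* the "last 90 slots" */
        int client = 0;

        while (total_avail >= slotsize) {
                unsigned long avail = total_avail / 3; /* old upper bound */
                unsigned long num;

                if (avail < slotsize)       /* never hand out less than one slot */
                        avail = slotsize;
                num = avail / slotsize;
                total_avail -= num * slotsize;
                printf("client %2d gets %2lu slots\n", ++client, num);
        }
        return 0;
}

Run with these assumed numbers it hands 30 slots to the first client and
exhausts the pool at the twelfth, matching the figures above.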
A subsequent patch allows over-allocation so that every client gets
at least one slot, but a single slot might be a bit restrictive.
The requested number of nfsd threads is the best guide we have to the
expected number of clients, so use that - if it is at least 8.
256 threads on a 256Meg machine - which is a lot for a tiny machine -
would result in nfsd_drc_max_mem being 2Meg, so 8K (3 slots) would be
available for the first client, and over 200 clients would get more
than 1 slot. So I don't think this change will be too debilitating on
poorly configured machines, though it does mean that a sensible
configuration is a little more important.
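As a sanity check of that arithmetic, the same kind of standalone sketch
with the new scaling (again not kernel code; the 2Meg value of
nfsd_drc_max_mem and a slot size of 2100 bytes, i.e. "slightly over 2K",
are assumptions chosen to match the figures quoted above):

/* Rough simulation of the new policy for 256 nfsd threads on a 256Meg
 * machine: each client's share is the remaining pool divided by
 * scale_factor = max(8, number of threads) = 256.
 */
#include <stdio.h>

int main(void)
{
        unsigned long slotsize = 2100;                 /* assumed: slightly over 2K */
        unsigned long total_avail = 2UL * 1024 * 1024; /* assumed nfsd_drc_max_mem */
        unsigned long scale_factor = 256;              /* max(8, nfsd thread count) */
        unsigned long first = 0;
        int clients = 0, multi = 0;

        while (total_avail >= slotsize) {
                unsigned long avail = total_avail / scale_factor;
                unsigned long num;

                if (avail < slotsize)       /* never hand out less than one slot */
                        avail = slotsize;
                num = avail / slotsize;
                if (clients == 0)
                        first = num;
                if (num > 1)
                        multi++;
                total_avail -= num * slotsize;
                clients++;
        }
        printf("first client: %lu slots; %d clients served, %d with more than one slot\n",
               first, clients, multi);
        return 0;
}

With these assumptions the first client gets 3 slots (8K of the 2Meg pool)
and a bit over 200 clients get more than one slot, in line with the
estimate above.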
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
 fs/nfsd/nfs4state.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 8622d6dd45c0..c65aeaa812d4 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1568,11 +1568,12 @@ static inline u32 slot_bytes(struct nfsd4_channel_attrs *ca)
  * re-negotiate active sessions and reduce their slot usage to make
  * room for new connections. For now we just fail the create session.
  */
-static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
+static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca, struct nfsd_net *nn)
 {
         u32 slotsize = slot_bytes(ca);
         u32 num = ca->maxreqs;
         unsigned long avail, total_avail;
+        unsigned int scale_factor;
 
         spin_lock(&nfsd_drc_lock);
         if (nfsd_drc_max_mem > nfsd_drc_mem_used)
@@ -1586,12 +1587,18 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca)
                 total_avail = 0;
         avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail);
         /*
-         * Never use more than a third of the remaining memory,
+         * Never use more than a fraction of the remaining memory,
          * unless it's the only way to give this client a slot.
+         * The chosen fraction is either 1/8 or 1/number of threads,
+         * whichever is smaller. This ensures there are adequate
+         * slots to support multiple clients per thread.
          * Give the client one slot even if that would require
          * over-allocation--it is better than failure.
          */
-        avail = clamp_t(unsigned long, avail, slotsize, total_avail/3);
+        scale_factor = max_t(unsigned int, 8, nn->nfsd_serv->sv_nrthreads);
+
+        avail = clamp_t(unsigned long, avail, slotsize,
+                        total_avail/scale_factor);
         num = min_t(int, num, avail / slotsize);
         num = max_t(int, num, 1);
         nfsd_drc_mem_used += num * slotsize;
@@ -3188,7 +3195,7 @@ static __be32 check_forechannel_attrs(struct nfsd4_channel_attrs *ca, struct nfs
          * Note that we always allow at least one slot, because our
          * accounting is soft and provides no guarantees either way.
          */
-        ca->maxreqs = nfsd4_get_drc_mem(ca);
+        ca->maxreqs = nfsd4_get_drc_mem(ca, nn);
 
         return nfs_ok;
 }
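For completeness, a minimal userspace walk-through (assumed values only,
not kernel code) of the tail-end behaviour described by the comment in
the hunk above: once the remaining pool divided by scale_factor no longer
covers one slot, the clamp leaves num at zero and the final max(num, 1)
over-allocates a single slot instead of failing the session.

/* The clamp below mirrors the shape of the kernel's clamp_t():
 * min(max(val, lo), hi).  All numbers are arbitrary illustrations.
 */
#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }
static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }

int main(void)
{
        unsigned long slotsize = 2100;      /* ~2K per slot, assumed */
        unsigned long total_avail = 1000;   /* nearly exhausted DRC pool */
        unsigned long scale_factor = 8;     /* max(8, nfsd thread count) */
        unsigned long avail = total_avail;  /* stands in for min(NFSD_MAX_MEM_PER_SESSION, total_avail) */
        unsigned long num;

        avail = min_ul(max_ul(avail, slotsize), total_avail / scale_factor);
        num = avail / slotsize;             /* 0 slots fit ...            */
        if (num < 1)                        /* ... but give one anyway    */
                num = 1;

        printf("avail=%lu -> num=%lu slot(s) despite only %lu bytes left\n",
               avail, num, total_avail);
        return 0;
}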