author    | J. Bruce Fields <bfields@redhat.com> | 2017-11-15 12:30:27 -0500
committer | J. Bruce Fields <bfields@redhat.com> | 2018-02-08 13:40:16 -0500
commit    | 0078117c6d9160031b866cfa1853514d4f6865d2 (patch)
tree      | 284b86e03244fca4e73f872fb32c42e08987590c /fs/nfsd/nfs4proc.c
parent    | 2502072058b35e2297f4ad7b211a45ad95a6a3d5 (diff)
nfsd: return RESOURCE not GARBAGE_ARGS on too many ops
A client that sends more than a hundred ops in a single compound
currently gets an rpc-level GARBAGE_ARGS error.
It would be more helpful to return NFS4ERR_RESOURCE, since that gives
the client a better idea how to recover (for example by splitting up the
compound into smaller compounds).
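As an illustration only (not part of this patch), a client that gets NFS4ERR_RESOURCE could recover by resending the same operations as several smaller compounds. The sketch below is standalone C; nfs_op, send_compound, and CLIENT_MAX_OPS are all hypothetical placeholders, not real client code:

```c
/*
 * Standalone sketch, not kernel or patch code: split a too-long list of
 * compound operations into chunks the server will accept.  All names here
 * (nfs_op, send_compound, CLIENT_MAX_OPS) are hypothetical placeholders.
 */
#include <stddef.h>

#define CLIENT_MAX_OPS 16		/* assumed per-compound limit */

struct nfs_op { int opnum; };		/* stand-in for a real op descriptor */

/* Stub transport hook: a real client would encode and send one compound. */
static int send_compound(const struct nfs_op *ops, size_t count)
{
	(void)ops;
	(void)count;
	return 0;
}

/* Resend nops operations as compounds of at most CLIENT_MAX_OPS each. */
static int send_in_chunks(const struct nfs_op *ops, size_t nops)
{
	size_t i;

	for (i = 0; i < nops; i += CLIENT_MAX_OPS) {
		size_t chunk = nops - i < CLIENT_MAX_OPS ? nops - i : CLIENT_MAX_OPS;
		int err = send_compound(ops + i, chunk);

		if (err)
			return err;
	}
	return 0;
}
```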
This is all a bit academic since we've never actually seen a reason for
clients to send such long compounds, but we may as well fix it.
While we're there, just use NFSD_MAX_OPS_PER_COMPOUND == 16, the
constant we already use in the 4.1 case, instead of hard-coding 100.
Chances anyone actually uses even 16 ops per compound are small enough
that I think there's negligible risk of any regression.
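For reference, the reused 4.1-side limit has the value 16 stated above; a minimal excerpt of its definition is sketched below (the header location, fs/nfsd/xdr4.h, is an assumption here):

```c
/* Assumed excerpt from fs/nfsd/xdr4.h: the limit this patch reuses. */
#define NFSD_MAX_OPS_PER_COMPOUND	16
```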
This fixes pynfs test COMP6.
Reported-by: "Lu, Xinyu" <luxy.fnst@cn.fujitsu.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Diffstat (limited to 'fs/nfsd/nfs4proc.c')
-rw-r--r-- | fs/nfsd/nfs4proc.c | 3 |
1 file changed, 3 insertions(+), 0 deletions(-)
```diff
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index effeeb4f556f..a0bed2b2004d 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1703,6 +1703,9 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
 	status = nfserr_minor_vers_mismatch;
 	if (nfsd_minorversion(args->minorversion, NFSD_TEST) <= 0)
 		goto out;
+	status = nfserr_resource;
+	if (args->opcnt > NFSD_MAX_OPS_PER_COMPOUND)
+		goto out;
 	status = nfs41_check_op_ordering(args);
 	if (status) {
```