path: root/fs/nfsd
* ima: pass 'opened' flag to identify newly created files (Dmitry Kasatkin, 2014-09-09; 1 file, -1/+1)
  Empty files and missing xattrs do not guarantee that a file was just created. This patch passes FILE_CREATED flag to IMA to reliably identify new files.
  Signed-off-by: Dmitry Kasatkin <d.kasatkin@samsung.com>
  Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
  Cc: <stable@vger.kernel.org> 3.14+
* nfsd: Fix bad reserving space for encoding rdattr_error (Kinglong Mee, 2014-07-07; 1 file, -1/+1)
  Introduced by commit 561f0ed498 (nfsd4: allow large readdirs).
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfs: fix nfs4d readlink truncated packet (Avi Kivity, 2014-07-02; 1 file, -1/+1)
  XDR requires 4-byte alignment; nfs4d READLINK reply writes out the padding, but truncates the packet to the padding-less size. Fix by taking the padding into consideration when truncating the packet.
  Symptoms:
      # ll /mnt/
      ls: cannot read symbolic link /mnt/test: Input/output error
      total 4
      -rw-r--r--. 1 root root  0 Jun 14 01:21 123456
      lrwxrwxrwx. 1 root root  6 Jul  2 03:33 test
      drwxr-xr-x. 1 root root  0 Jul  2 23:50 tmp
      drwxr-xr-x. 1 root root 60 Jul  2 23:44 tree
  Signed-off-by: Avi Kivity <avi@cloudius-systems.com>
  Fixes: 476a7b1f4b2c (nfsd4: don't treat readlink like a zero-copy operation)
  Reviewed-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
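  As a rough illustration of the padding rule behind this fix: XDR opaque data always occupies a multiple of four bytes on the wire, so the reply must be truncated to the padded length, not the raw string length. The helper below is ours, for illustration only, not a kernel function:
      #include <linux/types.h>

      /* XDR pads opaque/string data out to the next 4-byte boundary; a reply
       * truncated to the unpadded length drops those pad bytes. */
      static inline u32 xdr_padded_len(u32 len)
      {
              return (len + 3) & ~3U;
      }

      /* e.g. the 6-byte link target "test" symptom above takes
       * xdr_padded_len(6) == 8 bytes of data on the wire, in addition to
       * the 4-byte length word. */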
* nfsd: fix rare symlink decoding bug (J. Bruce Fields, 2014-06-27; 2 files, -10/+12)
  An NFS operation that creates a new symlink includes the symlink data, which is xdr-encoded as a length followed by the data plus 0 to 3 bytes of zero-padding as required to reach a 4-byte boundary. The vfs, on the other hand, wants null-terminated data.
  The simple way to handle this would be by copying the data into a newly allocated buffer with space for the final null. The current nfsd_symlink code tries to be more clever by skipping that step in the (likely) case where the byte following the string is already 0. But that assumes that the byte following the string is ours to look at. In fact, it might be the first byte of a page that we can't read, or of some object that another task might modify. Worse, the NFSv4 code tries to fix the problem by actually writing to that byte.
  In the NFSv2/v3 cases this actually appears to be safe:
  - nfs3svc_decode_symlinkargs explicitly null-terminates the data (after first checking its length and copying it to a new page).
  - NFSv2 limits symlinks to 1k. The buffer holding the rpc request is always at least a page, and the link data (and previous fields) have maximum lengths that prevent the request from reaching the end of a page.
  In the NFSv4 case the CREATE op is potentially just one part of a long compound so can end up on the end of a page if you're unlucky.
  The minimal fix here is to copy and null-terminate in the NFSv4 case. The nfsd_symlink() interface here seems too fragile, though. It should really either do the copy itself every time or just require a null-terminated string.
  Reported-by: Jeff Layton <jlayton@primarydata.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
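  A minimal sketch of the "copy and null-terminate" approach the message describes; the helper name and calling convention are illustrative, not the actual nfsd interface:
      #include <linux/slab.h>
      #include <linux/string.h>

      /* Copy 'len' bytes of XDR-decoded link data into a fresh buffer and
       * terminate it, so we never read (or write) the byte that follows
       * the string in the XDR buffer. */
      static char *copy_link_name(const char *xdr_data, unsigned int len)
      {
              char *name = kmalloc(len + 1, GFP_KERNEL);

              if (!name)
                      return NULL;
              memcpy(name, xdr_data, len);
              name[len] = '\0';
              return name;
      }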
* NFSD: fix bug for readdir of pseudofs (Kinglong Mee, 2014-06-17; 1 file, -0/+1)
  Commit 561f0ed498ca (nfsd4: allow large readdirs) introduced a bug in readdir of the root of the pseudofs: call xdr_truncate_encode() to revert the encoded name when skipping an entry.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Don't hand out delegations for 30 seconds after recalling them. (NeilBrown, 2014-06-17; 1 file, -0/+78)
  If nfsd needs to recall a delegation for some reason it implies that there is contention on the file, so further delegations should not be handed out. The current code fails to do so, and the result is effectively a live-lock under some workloads: a client attempting a conflicting operation on a read-delegated file receives NFS4ERR_DELAY and retries the operation, but by the time it retries the server may already have given out another delegation.
  We could simply avoid delegations for (say) 30 seconds after any recall, but this is probably too heavy handed. We could keep a list of inodes (or inode numbers or filehandles) for recalled delegations, but that requires memory allocation and searching.
  The approach taken here is to use a bloom filter to record the filehandles which are currently blocked from delegation, and to accept the cost of a few false positives. We have 2 bloom filters, each of which is valid for 30 seconds. When a delegation is recalled the filehandle is added to one filter and will remain disabled for between 30 and 60 seconds. We keep a count of the number of filehandles that have been added, so when that count is zero we can bypass all other tests.
  The bloom filters have 256 bits and 3 hash functions. This should allow a couple of dozen blocked filehandles with minimal false positives. If many more filehandles are all blocked at once, behaviour will degrade towards rejecting all delegations for between 30 and 60 seconds, then resetting and allowing new delegations.
  Signed-off-by: NeilBrown <neilb@suse.de>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
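  A rough sketch of the two-generation bloom filter described above (256 bits, 3 hashes, generations rotated every 30 seconds). The structure, names, and hash function are assumptions for illustration; the real nfsd code also handles locking and the 30-second generation swap, which are omitted here:
      #include <linux/bitmap.h>
      #include <linux/jhash.h>
      #include <linux/time64.h>

      #define BLOCK_BITS      256
      #define BLOCK_HASHES    3

      struct bloom_pair {
              int             entries;        /* filehandles currently blocked */
              time64_t        swap_time;      /* when the filters last rotated (rotation not shown) */
              int             new;            /* index of the filter taking new additions */
              DECLARE_BITMAP(set[2], BLOCK_BITS);
      };

      static void block_fh(struct bloom_pair *bd, const void *fh, unsigned int len)
      {
              u32 hash = 0;
              int i;

              for (i = 0; i < BLOCK_HASHES; i++) {
                      hash = jhash(fh, len, hash);
                      __set_bit(hash & (BLOCK_BITS - 1), bd->set[bd->new]);
              }
              bd->entries++;
      }

      static bool fh_blocked(struct bloom_pair *bd, const void *fh, unsigned int len)
      {
              int f, i;

              if (bd->entries == 0)           /* fast path: nothing is blocked */
                      return false;
              for (f = 0; f < 2; f++) {
                      u32 hash = 0;
                      bool hit = true;

                      for (i = 0; i < BLOCK_HASHES; i++) {
                              hash = jhash(fh, len, hash);
                              if (!test_bit(hash & (BLOCK_BITS - 1), bd->set[f]))
                                      hit = false;
                      }
                      if (hit)
                              return true;    /* possibly blocked; false positives accepted */
              }
              return false;
      }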
* nfsd4: fix FREE_STATEID lockowner leak (J. Bruce Fields, 2014-06-09; 1 file, -1/+1)
  27b11428b7de ("nfsd4: remove lockowner when removing lock stateid") introduced a memory leak.
  Cc: stable@vger.kernel.org
  Reported-by: Jeff Layton <jeff.layton@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: don't halt scanning the DRC LRU list when there's an RC_INPROG entry (Jeff Layton, 2014-06-06; 1 file, -9/+8)
  Currently, the DRC cache pruner will stop scanning the list when it hits an entry that is RC_INPROG. It's possible however for a call to take a *very* long time. In that case, we don't want it to block other entries from being pruned if they are expired or we need to trim the cache to get back under the limit.
  Fix the DRC cache pruner to just ignore RC_INPROG entries.
  Signed-off-by: Jeff Layton <jlayton@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
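  A simplified sketch of the loop-shape change this describes: continue past in-progress entries rather than breaking out of the scan. The entry structure and field names are invented for illustration and are not the nfsd reply-cache types:
      #include <linux/jiffies.h>
      #include <linux/list.h>
      #include <linux/slab.h>
      #include <linux/types.h>

      struct drc_entry {
              struct list_head lru;
              unsigned long    expiry;        /* jiffies when the entry goes stale */
              bool             in_progress;   /* stands in for RC_INPROG */
      };

      static unsigned long prune_lru(struct list_head *lru, unsigned long *nr,
                                     unsigned long limit)
      {
              struct drc_entry *rp, *tmp;
              unsigned long freed = 0;

              list_for_each_entry_safe(rp, tmp, lru, lru) {
                      if (rp->in_progress)
                              continue;       /* was "break": don't stall the whole scan */
                      if (time_before(jiffies, rp->expiry) && *nr <= limit)
                              break;          /* remaining entries are newer; done */
                      list_del(&rp->lru);
                      kfree(rp);
                      (*nr)--;
                      freed++;
              }
              return freed;
      }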
* nfsd4: kill READ64 (J. Bruce Fields, 2014-06-06; 1 file, -17/+16)
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: kill READ32 (J. Bruce Fields, 2014-06-06; 1 file, -128/+127)
  While we're here, let's kill off a couple of the read-side macros. Leaving the more complicated ones alone for now.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: simplify server xdr->next_page use (J. Bruce Fields, 2014-06-06; 1 file, -1/+1)
  The rpc code makes available to the NFS server an array of pages to encode into. The server represents its reply as an xdr buf, with the head pointing into the first page in that array, the pages ** array starting just after that, and the tail (if any) sharing any leftover space in the page used by the head.
  While encoding, we use xdr_stream->page_ptr to keep track of which page we're currently using. Currently we set xdr_stream->page_ptr to buf->pages, which makes the head a weird exception to the rule that page_ptr always points to the page we're currently encoding into. So, instead set it to buf->pages - 1 (the page actually containing the head), and remove the need for a little unintuitive logic in xdr_get_next_encode_buffer() and xdr_truncate_encode().
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: hash deleg stateid only on successful nfs4_set_delegation (Benny Halevy, 2014-06-04; 1 file, -1/+1)
  We don't want the stateid to be found in the hash table before the delegation is granted. Currently this is protected by the client_mutex, but we want to break that up and this is a necessary step toward that goal.
  Signed-off-by: Benny Halevy <bhalevy@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: rename recall_lock to state_lock (Benny Halevy, 2014-06-04; 1 file, -31/+32)
  ...as the name is a bit more descriptive and we've started using it for other purposes.
  Signed-off-by: Benny Halevy <bhalevy@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: remove unneeded zeroing of fields in nfsd4_proc_compound (Jeff Layton, 2014-06-04; 1 file, -3/+0)
  The memset of resp in svc_process_common should ensure that these are already zeroed by the time they get here.
  Signed-off-by: Jeff Layton <jlayton@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: fix setting of NFS4_OO_CONFIRMED in nfsd4_open (Jeff Layton, 2014-06-04; 1 file, -2/+1)
  In the NFS4_OPEN_CLAIM_PREVIOUS case, we should only mark it confirmed if the nfs4_check_open_reclaim check succeeds.
  In the NFS4_OPEN_CLAIM_DELEG_PREV_FH and NFS4_OPEN_CLAIM_DELEGATE_PREV cases, I see no point in declaring the openowner confirmed when the operation is going to fail anyway, and doing so might allow the client to game things such that it wouldn't need to confirm a subsequent open with the same owner.
  Signed-off-by: Jeff Layton <jlayton@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: use recall_lock for delegation hashing (Benny Halevy, 2014-06-04; 1 file, -5/+15)
  This fixes a bug in the handling of the fi_delegations list. nfs4_setlease does not hold the recall_lock when adding to it. The client_mutex is held, which prevents against concurrent list changes, but nfsd_break_deleg_cb does not hold it while walking that list. New delegations could theoretically creep onto the list while we're walking it there.
  Signed-off-by: Benny Halevy <bhalevy@primarydata.com>
  Signed-off-by: Jeff Layton <jlayton@primarydata.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: fix laundromat next-run-time calculation (Jeff Layton, 2014-05-30; 1 file, -14/+8)
  The laundromat uses two variables to calculate when it should next run, but one is completely ignored at the end of the run. Merge the two and rename the variable to be more descriptive of what it does.
  Signed-off-by: Jeff Layton <jlayton@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: make nfsd4_encode_fattr static (Jeff Layton, 2014-05-30; 1 file, -1/+1)
  sparse says:
      CHECK fs/nfsd/nfs4xdr.c
      fs/nfsd/nfs4xdr.c:2043:1: warning: symbol 'nfsd4_encode_fattr' was not declared. Should it be static?
  Signed-off-by: Jeff Layton <jlayton@primarydata.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* SUNRPC/NFSD: Remove using of dprintk with KERN_WARNING (Kinglong Mee, 2014-05-30; 1 file, -3/+2)
  When debugging, rpc prints messages from dprintk(KERN_WARNING ...) with "^A4" prefixed:
      [ 2780.339988] ^A4nfsd: connect from unprivileged port: 127.0.0.1, port=35316
  Trond points out:
      > dprintk != printk. We have NEVER supported dprintk(KERN_WARNING...)
  This patch removes the use of dprintk with KERN_WARNING.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: remove unused function nfsd_read_file (Christoph Hellwig, 2014-05-30; 2 files, -22/+0)
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd: getattr for FATTR4_WORD0_FILES_AVAIL needs the statfs buffer (Christoph Hellwig, 2014-05-30; 1 file, -2/+2)
  Note nobody's ever noticed because the typical client probably never requests FILES_AVAIL without also requesting something else on the list.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Cc: stable@vger.kernel.org
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Error out when getting more than one fsloc/secinfo/uuid (Kinglong Mee, 2014-05-30; 1 file, -0/+12)
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Using type of uint32_t for ex_nflavors instead of int (Kinglong Mee, 2014-05-30; 2 files, -4/+5)
  ex_nflavors can't be a negative number, so define it as uint32_t.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Add missing comment of "expiry" in expkey_parse() (Kinglong Mee, 2014-05-30; 1 file, -1/+1)
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Remove typedef of svc_client and svc_export in export.c (Kinglong Mee, 2014-05-30; 1 file, -11/+8)
  No need for a typedef wrapper for svc_export or svc_client, remove them.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Cleanup unneeded including net/ipv6.h (Kinglong Mee, 2014-05-30; 1 file, -2/+0)
  Commit 49b28684fdba ("nfsd: Remove deprecated nfsctl system call and related code") removed the only use of ipv6_addr_set_v4mapped(), so net/ipv6.h is unneeded now.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Cleanup unused variable in nfsd_setuser() (Kinglong Mee, 2014-05-30; 1 file, -3/+1)
  Commit 8f6c5ffc8987 ("kernel/groups.c: remove return value of set_groups") removed the last use of "ret".
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: remove unneeded linux/user_namespace.h include (Kinglong Mee, 2014-05-30; 1 file, -1/+0)
  After commit 4c1e1b34d5c8 ("nfsd: Store ex_anon_uid and ex_anon_gid as kuids and kgids") using kuid/kgid for ex_anon_uid/ex_anon_gid, user_namespace.h is not needed.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Adds macro EX_UUID_LEN for exports uuid's length (Kinglong Mee, 2014-05-30; 3 files, -4/+6)
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFSD: Helper function for parsing uuid (Kinglong Mee, 2014-05-30; 1 file, -12/+20)
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* NFS4: Avoid NULL reference or double free in nfsd4_fslocs_free() (Kinglong Mee, 2014-05-30; 1 file, -3/+9)
  If fsloc_parse() failed at the kzalloc() in fs/nfsd/export.c:
      fsloc->locations = kzalloc(fsloc->locations_count
                      * sizeof(struct nfsd4_fs_location), GFP_KERNEL);
      if (!fsloc->locations)
              return -ENOMEM;
  then svc_export_parse() will call nfsd4_fslocs_free() with fsloc->locations == NULL, so "kfree(fsloc->locations[i].path);" will cause a crash.
  If fsloc_parse() failed after that, fsloc_parse() will call nfsd4_fslocs_free(), and svc_export_parse() will call it again, causing a double free.
  This patch checks fsloc->locations and sets it to NULL after it is freed.
  Signed-off-by: Kinglong Mee <kinglongmee@gmail.com>
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
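  A minimal illustration of the fix pattern: guard against a NULL array and clear the pointer after freeing so a second call becomes a no-op. The structures below mirror the fields quoted above but are simplified stand-ins, not the actual nfsd definitions:
      #include <linux/slab.h>

      struct fs_location {
              char *hosts;
              char *path;
      };

      struct fs_locations {
              struct fs_location *locations;
              int locations_count;
      };

      static void fslocs_free(struct fs_locations *fsloc)
      {
              int i;

              if (!fsloc->locations)
                      return;                 /* nothing allocated, nothing to free */
              for (i = 0; i < fsloc->locations_count; i++) {
                      kfree(fsloc->locations[i].path);
                      kfree(fsloc->locations[i].hosts);
              }
              kfree(fsloc->locations);
              fsloc->locations = NULL;        /* make a second call a no-op */
      }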
* nfsd4: better reservation of head space for krb5 (J. Bruce Fields, 2014-05-30; 3 files, -5/+6)
  RPC_MAX_AUTH_SIZE is scattered around several places. Better to set it once in the auth code, where this kind of estimate should be made. And while we're at it we can leave it zero when we're not using krb5i or krb5p.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: kill write32, write64 (J. Bruce Fields, 2014-05-30; 1 file, -30/+21)
  And switch a couple other functions from the encode(&p,...) convention to the p = encode(p,...) convention mostly used elsewhere.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: kill WRITEMEM (J. Bruce Fields, 2014-05-30; 1 file, -34/+28)
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: kill WRITE64 (J. Bruce Fields, 2014-05-30; 1 file, -28/+24)
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: kill WRITE32 (J. Bruce Fields, 2014-05-30; 1 file, -152/+154)
  These macros just obscure what's going on. Adopt the convention of the client-side code.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: really fix nfs4err_resource in 4.1 case (J. Bruce Fields, 2014-05-30; 1 file, -0/+8)
  encode_getattr, for example, can return nfserr_resource to indicate it ran out of buffer space. That's not a legal error in the 4.1 case. And in the 4.1 case, if we ran out of buffer space, we should have exceeded a session limit too.
  (Note in 1bc49d83c37cfaf46be357757e592711e67f9809 "nfsd4: fix nfs4err_resource in 4.1 case" we originally tried fixing this error return before fixing the problem that we could error out while we still had lots of available space. The result was to trade one illegal error for another in those cases. We decided that was unhelpful, so reverted the change in fc208d026be0c7d60db9118583fc62f6ca97743d, and are only reinstating it now that we've eliminated almost all of those cases.)
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: allow exotic read compounds (J. Bruce Fields, 2014-05-30; 1 file, -36/+25)
  I'm not sure why a client would want to stuff multiple reads in a single compound rpc, but it's legal for them to do it, and we should really support it.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: more read encoding cleanup (J. Bruce Fields, 2014-05-30; 1 file, -12/+10)
  More cleanup, no change in functionality.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: read encoding cleanup (J. Bruce Fields, 2014-05-30; 1 file, -13/+14)
  Trivial cleanup, no change in functionality.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: separate splice and readv cases (J. Bruce Fields, 2014-05-30; 3 files, -88/+214)
  The splice and readv cases are actually quite different--for example the former case ignores the array of vectors we build up for the latter. It is probably clearer to separate the two cases entirely.
  There's some code duplication between the split out encoders, but this is only temporary and will be fixed by a later patch.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: nfsd_vfs_read doesn't use file handle parameter (J. Bruce Fields, 2014-05-30; 1 file, -3/+3)
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: turn off zero-copy-read in exotic cases (J. Bruce Fields, 2014-05-30; 1 file, -4/+17)
  We currently allow only one read per compound, with operations before and after whose responses will require no more than about a page to encode. While we don't expect clients to violate those limits any time soon, this limitation isn't really condoned by the spec, so to future proof the server we should lift the limitation.
  At the same time we'd like to continue to support zero-copy reads. Supporting multiple zero-copy-reads per compound would require a new data structure to replace struct xdr_buf, which can represent only one set of included pages.
  So for now we plan to modify encode_read() to support either zero-copy or non-zero-copy reads, and use some heuristics at the start of the compound processing to decide whether a zero-copy read will work. This will allow us to support more exotic compounds without introducing a performance regression in the normal case.
  Later patches handle those "exotic compounds", this one just makes sure zero-copy is turned off in those cases.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: estimate sequence response size (J. Bruce Fields, 2014-05-30; 1 file, -0/+6)
  Otherwise a following patch would turn off all 4.1 zero-copy reads.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: better estimate of getattr response size (J. Bruce Fields, 2014-05-30; 1 file, -0/+44)
  We plan to use this estimate to decide whether or not to allow zero-copy reads. Currently we're assuming all getattr's are a page, which can be both too small (ACLs e.g. may be arbitrarily long) and too large (after an upcoming read patch this will unnecessarily prevent zero copy reads in any read compound also containing a getattr).
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: don't treat readlink like a zero-copy operation (J. Bruce Fields, 2014-05-30; 1 file, -30/+12)
  There's no advantage to this zero-copy-style readlink encoding, and it unnecessarily limits the kinds of compounds we can handle. (In practice I can't see why a client would want e.g. multiple readlink calls in a compound, but it's probably a spec violation for us not to handle it.)
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: enforce rd_dircount (J. Bruce Fields, 2014-05-30; 1 file, -1/+4)
  As long as we're here, let's enforce the protocol's limit on the number of directory entries to return in a readdir. I don't think anyone's ever noticed our lack of enforcement, but maybe there's more of a chance they will now that we allow larger readdirs.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: allow large readdirs (J. Bruce Fields, 2014-05-30; 3 files, -69/+82)
  Currently we limit readdir results to a single page. This can result in a performance regression compared to NFSv3 when reading large directories.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: use session limits to release send buffer reservation (J. Bruce Fields, 2014-05-30; 1 file, -0/+1)
  Once we know the limits the session places on the size of the rpc, we can also use that information to release any unnecessary reserved reply buffer space.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>
* nfsd4: adjust buflen to session channel limit (J. Bruce Fields, 2014-05-30; 2 files, -16/+19)
  We can simplify session limit enforcement by restricting the xdr buflen to the session size.
  Also fix a preexisting bug: we should really have been taking into account the auth-required space when comparing against session limits, which are limits on the size of the entire rpc reply, including any krb5 overhead.
  Signed-off-by: J. Bruce Fields <bfields@redhat.com>