path: root/fs
Commit message (Author, Date; Files, Lines -deleted/+added)
* NFS: Ensure that we always drop inodes that have been marked as stale (Trond Myklebust, 2012-12-14; 4 files, -0/+9)
|   There is no need to cache stale inodes.
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* nfs: Remove unused list nfs4_clientid_list (Yanchuan Nian, 2012-12-13; 1 file, -1/+0)
|   This list was designed to store struct nfs4_client on the client side, but nfs4_client is obsolete and has been removed from the source code. So remove the unused list.
|   Signed-off-by: Yanchuan Nian <ycnian@gmail.com>
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* nfs: Remove duplicate function declaration in internal.h (Yanchuan Nian, 2012-12-13; 1 file, -6/+0)
|   Remove a duplicate function declaration in internal.h.
|   Signed-off-by: Yanchuan Nian <ycnian@gmail.com>
|   [Trond: Added nfs_pageio_init_read, which suffered from the same problem]
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFS: avoid NULL dereference in nfs_destroy_server (NeilBrown, 2012-12-12; 1 file, -2/+1)
|   In rare circumstances, nfs_clone_server() of a v2 or v3 server can get an error between setting server->destroy (to nfs_destroy_server) and calling nfs_start_lockd (which would set server->nlm_host).
|   If this happens, nfs_clone_server will call nfs_free_server, which will call nfs_destroy_server and thence nlmclnt_done(NULL). This causes the NULL to be dereferenced.
|   So add a guard to only call nlmclnt_done() if ->nlm_host is not NULL. The other guards there are irrelevant, as nlm_host can only be non-NULL if one of those flags is set, so remove those tests. (Thanks to Trond for this suggestion.)
|   This is suitable for any stable kernel since 2.6.25.
|   Cc: stable@vger.kernel.org
|   Signed-off-by: NeilBrown <neilb@suse.de>
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
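The fix itself is just a NULL guard around the lockd teardown. A minimal sketch of the resulting destructor, based only on the description above (the surrounding code in fs/nfs/client.c is more involved):

    static void nfs_destroy_server(struct nfs_server *server)
    {
        /* nlm_host is only set once nfs_start_lockd() has succeeded, so it
         * is the one reliable thing to test before tearing lockd down. */
        if (server->nlm_host)
            nlmclnt_done(server->nlm_host);
    }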
* SUNRPC handle EKEYEXPIRED in call_refreshresult (Andy Adamson, 2012-12-12; 5 files, -88/+3)
|   Currently, when an RPCSEC_GSS context has expired or is non-existent and the user's (Kerberos) credentials have also expired or are non-existent, the client receives the -EKEYEXPIRED error and tries to refresh the context forever. If an application is performing I/O, or other work against the share, the application hangs and the user is not prompted to refresh/establish their credentials. This can result in a denial of service for other users.
|   Users are expected to manage their Kerberos credential lifetimes to mitigate this issue.
|   Move the -EKEYEXPIRED handling into the RPC layer. Try tk_cred_retry number of times to refresh the gss_context, and then return -EACCES to the application.
|   Signed-off-by: Andy Adamson <andros@netapp.com>
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
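In outline, the new behaviour sits in call_refreshresult() in net/sunrpc/clnt.c: a bounded number of refresh attempts, then a hard failure. A rough sketch of that switch, with neighbouring cases and details omitted:

    switch (status) {
    case -EKEYEXPIRED:
    case -EAGAIN:
        status = -EACCES;               /* what the caller sees if we give up */
        if (!task->tk_cred_retry)
            break;                      /* retries exhausted, fail the call */
        task->tk_cred_retry--;
        task->tk_action = call_refresh; /* try to refresh the GSS context again */
        return;
    }
    rpc_exit(task, status);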
* nfs: fix page dirtying in NFS DIO read codepath (Jeff Layton, 2012-12-12; 1 file, -7/+2)
|   The NFS DIO code will dirty pages that catch read responses in order to handle the case where someone is doing DIO reads into an mmapped buffer. The existing code doesn't really do the right thing though, since it doesn't take into account the case where we might be attempting to read past the EOF.
|   Fix the logic in that code to only dirty pages that ended up receiving data from the read. Note too that it really doesn't matter whether NFS_IOHDR_ERROR is set or not. All that matters is whether the page was altered by the read.
|   Cc: Fred Isaman <iisaman@netapp.com>
|   Signed-off-by: Jeff Layton <jlayton@redhat.com>
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
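Conceptually, the fix walks the completed request list and compares a running byte count against hdr->good_bytes, so only pages that actually received data get dirtied. A simplified sketch, not the verbatim fs/nfs/direct.c code:

    unsigned long bytes = 0;

    while (!list_empty(&hdr->pages)) {
        struct nfs_page *req = nfs_list_entry(hdr->pages.next);

        if (bytes < hdr->good_bytes)        /* this page received read data */
            set_page_dirty(req->wb_page);
        bytes += req->wb_bytes;
        nfs_list_remove_request(req);
        nfs_release_request(req);           /* simplified release path */
    }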
* nfs: don't zero out the rest of the page if we hit the EOF on a DIO READ (Jeff Layton, 2012-12-12; 1 file, -8/+0)
|   Eryu provided a test program that would segfault when attempting to read past the EOF on a file that was opened O_DIRECT. The buffer given to the read() call was on the stack, and when the read attempted to go past the EOF it would scribble over the rest of the stack page.
|   If we hit the end of the file on a DIO READ request, then we don't want to zero out the rest of the buffer. These aren't pagecache pages after all, and there's no guarantee that the buffers that were passed in represent entire pages.
|   Cc: <stable@vger.kernel.org> # v3.5+
|   Cc: Fred Isaman <iisaman@netapp.com>
|   Reported-by: Eryu Guan <eguan@redhat.com>
|   Signed-off-by: Jeff Layton <jlayton@redhat.com>
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFSv4.1: Be conservative about the client highest slotid (Trond Myklebust, 2012-12-11; 1 file, -6/+16)
|   If the server sends us a target that looks like an outlier, but is lower than the existing target, then respect it anyway. However, defer actually updating the generation counter until we get a target that doesn't look like an outlier.
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* NFSv4.1: Handle NFS4ERR_BADSLOT errors correctly (Trond Myklebust, 2012-12-11; 1 file, -1/+12)
|   Most (all) NFS4ERR_BADSLOT errors are due to the client failing to respect the server's sr_highest_slotid limit. This mainly happens due to reordered RPC requests. The way to handle it is simply to drop the slot that we're using and retry using the new highest_slotid limits.
|   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* Merge branch 'bugfixes' into nfs-for-next (Trond Myklebust, 2012-12-11; 10 files, -45/+57)
|\
| * nfs: don't extend writes to cover entire page if pagecache is invalid (Jeff Layton, 2012-12-11; 1 file, -1/+1)
| |   Jian reported that the following sequence would leave "testfile" with corrupt data:
| |       # mount localhost:/export /mnt/nfs/ -o vers=3
| |       # echo abc > /mnt/nfs/testfile; echo def >> /export/testfile; echo ghi >> /mnt/nfs/testfile
| |       # cat -v /export/testfile
| |       abc
| |       ^@^@^@^@ghi
| |   While there's no locking involved here, the operations are serialized, so CTO should prevent corruption.
| |   The first write to the file is fine and writes 4 bytes. The file is then extended on the server. When it's reopened, a GETATTR is issued and the size change is noticed. This causes NFS_INO_INVALID_DATA to be set on the file. Because the file is opened for write only, nfs_want_read_modify_write() returns 0 to nfs_write_begin(). nfs_updatepage then calls nfs_write_pageuptodate() to see if it should extend the nfs_page to cover the whole page. NFS_INO_INVALID_DATA is still set on the file at that point, but that flag is ignored, and nfs_write_pageuptodate() erroneously extends the write to cover the whole page, with the portion written on the server side filled in with zeroes.
| |   This patch just has that function check for NFS_INO_INVALID_DATA in addition to NFS_INO_REVAL_PAGECACHE. This fixes the bug, but looking over the code, I wonder if we might have a similar bug in nfs_revalidate_size(). The difference between those two flags is very subtle, so it seems like we ought to be checking for NFS_INO_INVALID_DATA in most of the places that we look for NFS_INO_REVAL_PAGECACHE.
| |   I believe this is a regression introduced by commit 8d197a568. The code did check for NFS_INO_INVALID_DATA prior to that patch.
| |   Original bug report is here: https://bugzilla.redhat.com/show_bug.cgi?id=885743
| |   Cc: <stable@vger.kernel.org> # 3.5+
| |   Reported-by: Jian Li <jiali@redhat.com>
| |   Signed-off-by: Jeff Layton <jlayton@redhat.com>
| |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
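The one-line change is easiest to see in context. A sketch of the helper as described above (the real nfs_write_pageuptodate() in fs/nfs/write.c also short-circuits when a delegation is held):

    static bool nfs_write_pageuptodate(struct page *page, struct inode *inode)
    {
        if (NFS_I(inode)->cache_validity &
            (NFS_INO_REVAL_PAGECACHE | NFS_INO_INVALID_DATA))
            return false;           /* cached data may be stale: don't extend */
        return PageUptodate(page) != 0;
    }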
| * NFSv4: Check for buffer length in __nfs4_get_acl_uncached (Sven Wegener, 2012-12-11; 1 file, -1/+6)
| |   Commit 1f1ea6c "NFSv4: Fix buffer overflow checking in __nfs4_get_acl_uncached" accidentally dropped the check for a result buffer that is too small. If someone uses getxattr on "system.nfs4_acl" on an NFSv4 mount supporting ACLs, the ACL has not been cached, and the buffer supplied is too short, we still copy the complete ACL, resulting in kernel and user space memory corruption.
| |   Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
| |   Cc: stable@kernel.org
| |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
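The restored check follows the usual getxattr contract: if the caller's buffer is smaller than the ACL, return -ERANGE instead of copying past the end. A rough sketch, with field and helper names only approximating those in __nfs4_get_acl_uncached():

    if (buf) {
        if (res.acl_len > buflen) {     /* caller's buffer is too small */
            ret = -ERANGE;
            goto out_free;
        }
        _copy_from_pages(buf, pages, res.acl_data_offset, res.acl_len);
    }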
| * nfs: Fix wrong slab cache in nfs_commit_mempool (Yanchuan Nian, 2012-11-25; 1 file, -1/+1)
| |   The slab cache used for nfs_commit_mempool is wrong; I think it is just a slip. I tested it on an x86-32 machine: the size of nfs_write_header is 544 and the size of nfs_commit_data is 408, so it happens to work. It is also true that sizeof(struct nfs_write_header) > sizeof(struct nfs_commit_data) on other platforms, in my opinion. Just fix it.
| |   Signed-off-by: Yanchuan Nian <ycnian@gmail.com>
| |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
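The point is simply that a mempool for commit data should be backed by the commit-data slab cache rather than the larger write-header cache. Roughly (the pool size constant and cache names here are illustrative, following fs/nfs/write.c conventions from memory):

    nfs_commit_mempool = mempool_create_slab_pool(4 /* minimum pool size */,
                                                  nfs_cdata_cachep);
    if (nfs_commit_mempool == NULL)
        return -ENOMEM;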
| * NFS: Reduce stack use in encode_exchange_id() (Jim Rees, 2012-11-21; 1 file, -3/+5)
| |   encode_exchange_id() uses more stack space than necessary, giving a compile time warning. Reduce the size of the static buffer for the implementation name.
| |   Signed-off-by: Jim Rees <rees@umich.edu>
| |   Reviewed-by: "Adamson, Dros" <Weston.Adamson@netapp.com>
| |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * NFSv4: Fix a compile time warning when #undef CONFIG_NFS_V4_1 (Trond Myklebust, 2012-11-21; 1 file, -1/+1)
| |   The function nfs4_get_machine_cred_locked is used by NFSv4.0 routines too.
| |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
| * Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes (Linus Torvalds, 2012-11-07; 6 files, -38/+43)
| |\
| | |   Pull gfs2 fixes from Steven Whitehouse: "Here are a number of GFS2 bug fixes. There are three from Andy Price which fix various issues spotted by automated code analysis. There are two from Lukas Czerner fixing my mistaken assumptions as to how FITRIM should work. Finally Ben Marzinski has fixed a bug relating to mmap and atime and also a bug relating to a locking issue in the transaction code."
| | |   * git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-3.0-fixes:
| | |     GFS2: Test bufdata with buffer locked and gfs2_log_lock held
| | |     GFS2: Don't call file_accessed() with a shared glock
| | |     GFS2: Fix FITRIM argument handling
| | |     GFS2: Require user to provide argument for FITRIM
| | |     GFS2: Clean up some unused assignments
| | |     GFS2: Fix possible null pointer deref in gfs2_rs_alloc
| | |     GFS2: Fix an unchecked error from gfs2_rs_alloc
| | * GFS2: Test bufdata with buffer locked and gfs2_log_lock held (Benjamin Marzinski, 2012-11-07; 2 files, -12/+10)
| | |   In gfs2_trans_add_bh(), gfs2 was testing whether a bd was attached to the buffer without having the gfs2_log_lock held. It was then assuming it would stay attached for the rest of the function. However, without either the log lock being held or the buffer locked, __gfs2_ail_flush() could detach bd at any time.
| | |   This patch moves the locking before the test. If there isn't a bd already attached, gfs2 can safely allocate one and attach it before locking. There is no way that the newly allocated bd could be on the ail list, and thus no way for __gfs2_ail_flush() to detach it.
| | |   Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| | * GFS2: Don't call file_accessed() with a shared glock (Benjamin Marzinski, 2012-11-07; 2 files, -8/+7)
| | |   file_accessed() was being called by gfs2_mmap() with a shared glock. If it needed to update the atime, it was crashing because it dirtied the inode in gfs2_dirty_inode() without holding an exclusive lock. gfs2_dirty_inode() checked if the caller was already holding a glock, but it didn't make sure that the glock was in the exclusive state.
| | |   Now, instead of calling file_accessed() while holding the shared lock in gfs2_mmap(), file_accessed() is called after grabbing and releasing the glock to update the inode. If file_accessed() needs to update the atime, it will grab an exclusive lock in gfs2_dirty_inode(). gfs2_dirty_inode() now also checks to make sure that if the calling process has already locked the glock, it has an exclusive lock.
| | |   Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| | * GFS2: Fix FITRIM argument handling (Lukas Czerner, 2012-11-07; 1 file, -3/+17)
| | |   The current implementation in gfs2 treats the FITRIM arguments as if they were in file system block units, which is wrong; the FITRIM arguments (fstrim_range.start, fstrim_range.len and fstrim_range.minlen) are actually in bytes. Moreover, checks were missing for a start argument beyond the end of the file system, a len argument smaller than the file system block size, and a minlen argument bigger than the biggest resource group.
| | |   This commit converts the FITRIM arguments to file system blocks and adds the appropriate checks mentioned above. All the problems were recognised by xfstests 251 and 260.
| | |   Signed-off-by: Lukas Czerner <lczerner@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
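The unit fix amounts to shifting the byte-based fstrim_range fields down to file system blocks before using them. A loose sketch, with r already copied from userspace, and bsize_shift and fs_blocks standing in for the GFS2 superblock fields (the real code in fs/gfs2/rgrp.c also clamps minlen against the device's discard granularity and the largest resource group):

    u64 start, len, minlen;

    start  = r.start >> bsize_shift;                     /* bytes -> fs blocks */
    len    = r.len >> bsize_shift;
    minlen = max_t(u64, r.minlen, 1ULL << bsize_shift) >> bsize_shift;

    if (start >= fs_blocks)                              /* start beyond end of fs */
        return -EINVAL;
    if (r.len < (1ULL << bsize_shift))                   /* range smaller than one block */
        return -EINVAL;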
| | * GFS2: Require user to provide argument for FITRIM (Lukas Czerner, 2012-11-07; 1 file, -6/+2)
| | |   When the fstrim_range argument is not provided by the user in the FITRIM ioctl, we should just return EFAULT and not promote bad behaviour by filling in the structure in the kernel. Let the user deal with it.
| | |   Signed-off-by: Lukas Czerner <lczerner@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
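The simplified path is just the ordinary copy_from_user() failure case; a rough sketch, with argp standing for the ioctl's user pointer:

    struct fstrim_range r;

    /* If userspace passed no range (e.g. a NULL pointer), the copy simply
     * fails; don't invent a default range inside the kernel. */
    if (copy_from_user(&r, argp, sizeof(r)))
        return -EFAULT;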
| | * GFS2: Clean up some unused assignments (Andrew Price, 2012-11-07; 2 files, -4/+0)
| | |   Cleans up two cases where variables were assigned values but then never used again.
| | |   Signed-off-by: Andrew Price <anprice@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| | * GFS2: Fix possible null pointer deref in gfs2_rs_alloc (Andrew Price, 2012-11-07; 1 file, -3/+2)
| | |   Despite the return value from kmem_cache_zalloc() being checked, the error wasn't being returned until after a possible null pointer dereference. This patch returns the error immediately, allowing the removal of the error variable.
| | |   Signed-off-by: Andrew Price <anprice@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
| | * GFS2: Fix an unchecked error from gfs2_rs_alloc (Andrew Price, 2012-11-07; 1 file, -2/+5)
| | |   Check the return value of gfs2_rs_alloc(ip) and avoid a possible null pointer dereference.
| | |   Signed-off-by: Andrew Price <anprice@redhat.com>
| | |   Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
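Both gfs2_rs_alloc() fixes above are instances of the same two patterns; a compressed sketch, with struct and cache names only approximate:

    /* 1) Fail fast inside gfs2_rs_alloc(): return before touching the
     *    reservation if the allocation itself failed. */
    struct gfs2_blkreserv *rs;

    rs = kmem_cache_zalloc(gfs2_rsrv_cachep, GFP_NOFS);
    if (!rs)
        return -ENOMEM;

    /* 2) Never ignore the return value at the call site. */
    error = gfs2_rs_alloc(ip);
    if (error)
        return error;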
* | | NFSv4.1: Try to eliminate outliers when updating target_highest_slotid (Trond Myklebust, 2012-12-06; 2 files, -5/+60)
| | |   Look for sudden changes in the first and second derivatives in order to eliminate outlier changes to target_highest_slotid (which are due to out-of-order RPC replies).
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Ensure smooth handover of slots from one task to the next waiting (Trond Myklebust, 2012-12-06; 4 files, -12/+69)
| | |   Currently, we see a lot of bouncing for the value of highest_used_slotid due to the fact that slots are getting freed, instead of getting instantly transmitted to the next waiting task.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Don't mess with task priorities in nfs41_setup_sequence (Trond Myklebust, 2012-12-06; 1 file, -4/+4)
| | |   We want to preserve the rpc_task priority for things like writebacks, that may have differing levels of urgency.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFS: Remove _nfs_call_sync_session (Bryan Schumaker, 2012-12-06; 1 file, -11/+1)
| | |   All it does is pass its arguments through to another function. Let's cut out the middleman...
| | |   Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4: Clean up handling of privileged operations (Trond Myklebust, 2012-12-06; 1 file, -72/+42)
| | |   Privileged rpc calls are those that are run by the state recovery thread, in cases where we're trying to recover the system after a server reboot or a network partition. In those cases, we want to fence off all other rpc calls (see nfs4_begin_drain_session()) so that they don't end up using stateids or clientids that are in the process of being recovered.
| | |   Prior to this patch, we had to set up special callback functions in order to declare an rpc call as being privileged. By adding a new field to the sequence arguments, this patch simplifies things considerably, and allows us to declare the rpc call as privileged before it is run.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
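The "new field in the sequence arguments" boils down to a flag that callers set before the RPC runs, instead of wiring up special callback ops. An approximate sketch (field and helper names follow the commit description but are not verbatim):

    struct nfs4_sequence_args {
        struct nfs4_slot   *sa_slot;
        u8                  sa_cache_this : 1,
                            sa_privileged : 1;   /* run even while draining the session */
    };

    static void nfs4_set_sequence_privileged(struct nfs4_sequence_args *args)
    {
        args->sa_privileged = 1;    /* mark before the RPC is submitted */
    }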
* | | NFSv4.1: Remove the 'FIFO' behaviour for nfs41_setup_sequence (Trond Myklebust, 2012-12-06; 3 files, -18/+2)
| | |   It is more important to preserve the task priority behaviour, which ensures that things like reclaim writes take precedence over background and kupdate writes.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Clean up nfs41_setup_sequence (Trond Myklebust, 2012-12-06; 1 file, -9/+7)
| | |   Move all the sleep-and-exit cases into a single section of code.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4: Simplify the NFSv4/v4.1 synchronous call switch (Trond Myklebust, 2012-12-06; 3 files, -22/+8)
| | |   We shouldn't need to pass the 'cache_reply' parameter if we initialise the sequence_args/sequence_res in the caller.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Simplify the sequence setup (Trond Myklebust, 2012-12-06; 3 files, -94/+62)
| | |   Nobody calls nfs4_setup_sequence or nfs41_setup_sequence without also calling rpc_call_start() on success. This commit therefore folds the rpc_call_start call into nfs41_setup_sequence().
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Use nfs41_setup_sequence where appropriate (Trond Myklebust, 2012-12-06; 1 file, -6/+9)
| | |   There is no point in using nfs4_setup_sequence or nfs4_sequence_done in pure NFSv4.1 functions. We already know that those have sessions...
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Ping server when our session table limits are too high (Trond Myklebust, 2012-12-06; 3 files, -3/+23)
| | |   If the server requests a lower target_highest_slotid, then ensure that we ping it with at least one RPC call containing an appropriate SEQUENCE op. This ensures that the server won't need to send a recall callback in order to shrink the slot table.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Set the maximum slot table size to 1024 slots (Trond Myklebust, 2012-12-06; 1 file, -1/+1)
| | |   This means that we end up statically allocating 128 bytes for the bitmap on each slot table. For a server that supports 1MB write and read I/O sizes this means that we can completely fill the maximum 1GB TCP send/receive windows.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
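Both figures in that last message follow directly from the new limit; as a quick back-of-envelope check (assuming each slot pins at most one 1 MB read or write):

    1024 slots / 8 bits per byte           = 128 bytes of statically allocated slot bitmap
    1024 in-flight RPCs x 1 MB rsize/wsize = 1024 MB = 1 GB of data in flight,
                                             enough to fill a 1 GB TCP send/receive window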
* | | NFSv4.1: Move slot table and session struct definitions to nfs4session.h (Trond Myklebust, 2012-12-06; 9 files, -33/+107)
| | |   Clean up. Gather NFSv4.1 slot definitions in fs/nfs/nfs4session.h.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Cleanup move session slot management to fs/nfs/nfs4session.c (Trond Myklebust, 2012-12-06; 9 files, -427/+477)
| | |   NFSv4.1 session management is getting complex enough to deserve a separate file.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4: Move nfs4_wait_clnt_recover and nfs4_client_recover_expired_lease (Trond Myklebust, 2012-12-06; 3 files, -36/+38)
| | |   nfs4_wait_clnt_recover and nfs4_client_recover_expired_lease are both generic state related functions. As such, they belong in nfs4state.c, and not nfs4proc.c.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Clean up session draining (Trond Myklebust, 2012-12-06; 5 files, -35/+25)
| | |   Coalesce nfs4_check_drain_bc_complete and nfs4_check_drain_fc_complete into a single function that can be called when the slot table is known to be empty, then change nfs4_callback_free_slot() and nfs4_free_slot() to use it.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: If slot allocation fails due to OOM, retry more quickly (Trond Myklebust, 2012-12-06; 1 file, -6/+11)
| | |   If the NFSv4.1 session slot allocation fails due to an ENOMEM condition, then set task->tk_timeout to 1/4 second to ensure that we do retry the slot allocation more quickly.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
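A sketch of that quick-retry path, assuming the slot allocator reports memory pressure via ERR_PTR(-ENOMEM); the surrounding locking in the real sequence-setup code is omitted:

    slot = nfs4_alloc_slot(tbl);
    if (IS_ERR(slot)) {
        if (PTR_ERR(slot) == -ENOMEM)
            task->tk_timeout = HZ >> 2;     /* retry in roughly 250 ms */
        rpc_sleep_on(&tbl->slot_tbl_waitq, task, NULL);
        goto out_unlock;
    }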
* | | NFSv4.1: CB_RECALL_SLOT must schedule a sequence op after updating targets (Trond Myklebust, 2012-12-06; 3 files, -0/+14)
| | |   RFC5661 requires us to make sure that the server knows we've updated our slot table size by sending at least one SEQUENCE op containing the new 'highest_slotid' value. We can do so using the 'CHECK_LEASE' functionality of the state manager.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Remove the state manager code to resize the slot table (Trond Myklebust, 2012-12-06; 4 files, -55/+0)
| | |   The state manager no longer needs any special machinery to stop the session flow and resize the slot table. It is all done on the fly by the SEQUENCE op code now.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Allow SEQUENCE to resize the slot table on the fly (Trond Myklebust, 2012-12-06; 3 files, -80/+120)
| | |   Instead of an array of slots, use a singly linked list of slots that can be dynamically appended to or shrunk.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Support dynamic resizing of the session slot table (Trond Myklebust, 2012-12-06; 2 files, -5/+13)
| | |   Allow the server to control the size of the session slot table by adjusting the value of sr_target_max_slots in the reply to the SEQUENCE operation.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Allow the server to recall all but one slot (Trond Myklebust, 2012-12-06; 1 file, -5/+0)
| | |   If the server wants to leave us with only one slot, or it wants to "shrink" our slot table to something larger than we have now, then so be it.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Don't confuse target_highest_slotid and max_slots in cb_recall_slot (Trond Myklebust, 2012-12-06; 3 files, -9/+7)
| | |   Don't confuse the table size and the target_highest_slotid...
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Fix nfs4_callback_recallslot to work with dynamic slot allocation (Trond Myklebust, 2012-12-06; 3 files, -1/+11)
| | |   Ensure that the NFSv4.1 CB_RECALL_SLOT callback updates the slot table target max slotid safely.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Reset the sequence number for slots that have been deallocated (Trond Myklebust, 2012-12-06; 2 files, -2/+20)
| | |   When the server tells us that it is dynamically resizing the session replay cache, we should reset the sequence number for those slots that have been deallocated.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
* | | NFSv4.1: Ensure that the client tracks the server target_highest_slotid (Trond Myklebust, 2012-12-06; 4 files, -7/+31)
| | |   Dynamic slot allocation in NFSv4.1 depends on the client being able to track the server's target value for the highest slotid in the slot table. See the reference in Section 2.10.6.1 of RFC5661.
| | |   To avoid ordering problems in the case where 2 SEQUENCE replies contain conflicting updates to this target value, we also introduce a generation counter, to track whether or not an RPC containing a SEQUENCE operation was launched before or after the last update.
| | |   Also rename the nfs4_slot_table target_max_slots field to 'target_highest_slotid' to avoid confusion with a slot table size or number of slots.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
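One way to picture the generation counter: each accepted target update bumps a counter, and a SEQUENCE reply may only install a new target if the request that produced it was launched after the last accepted update. A loose illustration only; the struct layout and the helper name here are hypothetical, and the real nfs4_slot_table logic is more involved:

    struct nfs4_slot_table {
        /* ... */
        u32 target_highest_slotid;  /* server's requested target */
        u32 generation;             /* bumped on every accepted update */
    };

    static void nfs41_update_target(struct nfs4_slot_table *tbl,
                                    u32 new_target, u32 request_generation)
    {
        if (request_generation != tbl->generation)
            return;                 /* reply predates the last update: ignore it */
        tbl->target_highest_slotid = new_target;
        tbl->generation++;
    }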
* | | NFSv4.1: Clean up nfs4_free_slot (Trond Myklebust, 2012-11-26; 1 file, -5/+7)
| | |   Change the argument to take the pointer to the slot, instead of just the slotid. We know that the new value of highest_used_slot must be less than the current value. No need to scan the whole table.
| | |   Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>