author    Dave Chinner <dchinner@redhat.com>    2011-03-22 22:23:36 +1100
committer Al Viro <viro@zeniv.linux.org.uk>     2011-03-24 21:16:31 -0400
commit    250df6ed274d767da844a5d9f05720b804240197 (patch)
tree      b74f49a86c4451d9e3e82f90e3f791163025be21 /include/linux/quotaops.h
parent    3dc8fe4dca9cd3e4aa828ed36451e2bcfd2350da (diff)
fs: protect inode->i_state with inode->i_lock
Protect inode state transitions and validity checks with the inode->i_lock. This enables us to make inode state transitions independently of the inode_lock and is the first step to peeling away the inode_lock from the code.

This requires that __iget() is done atomically with i_state checks during list traversals so that we don't race with another thread marking the inode I_FREEING between the state check and grabbing the reference.

Also remove the unlock_new_inode() memory barrier optimisation required to avoid taking the inode_lock when clearing I_NEW. Simplify the code by taking the inode->i_lock around the state change and wakeup. Because the wakeup is no longer tricky, remove the wake_up_inode() function and open code the wakeup where necessary.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
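For reference, a minimal sketch (kernel-style C) of the two patterns the message describes. The helper names and the list being walked are illustrative only and are not part of this patch; the list itself is assumed to be protected by the caller (still inode_lock at this point in the series).

#include <linux/fs.h>
#include <linux/spinlock.h>
#include <linux/list.h>
#include <linux/wait.h>

/*
 * Sketch of the list-walk pattern described above: i_state is checked
 * and the reference is taken while holding inode->i_lock, so another
 * thread cannot mark the inode I_FREEING between the state check and
 * __iget().  The helper and the list head are hypothetical.
 */
static struct inode *sketch_grab_inode(struct list_head *head)
{
	struct inode *inode;

	list_for_each_entry(inode, head, i_sb_list) {
		spin_lock(&inode->i_lock);
		if (inode->i_state & (I_NEW | I_FREEING | I_WILL_FREE)) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);			/* reference taken under i_lock */
		spin_unlock(&inode->i_lock);
		return inode;
	}
	return NULL;
}

/*
 * Sketch of the simplified I_NEW clearing: instead of the old memory
 * barrier trick and wake_up_inode(), take i_lock around the state
 * change and open code the wakeup.
 */
static void sketch_unlock_new_inode(struct inode *inode)
{
	spin_lock(&inode->i_lock);
	WARN_ON(!(inode->i_state & I_NEW));
	inode->i_state &= ~I_NEW;
	wake_up_bit(&inode->i_state, __I_NEW);
	spin_unlock(&inode->i_lock);
}

The quotaops.h hunk below is a comment-only follow-up to this change: with inode_lock on its way out, the comment refers to lock contention generically rather than to inode_lock.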
Diffstat (limited to 'include/linux/quotaops.h')
-rw-r--r--  include/linux/quotaops.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h
index eb354f6f26b3..26f9e3612e0f 100644
--- a/include/linux/quotaops.h
+++ b/include/linux/quotaops.h
@@ -277,7 +277,7 @@ static inline int dquot_alloc_space(struct inode *inode, qsize_t nr)
/*
* Mark inode fully dirty. Since we are allocating blocks, inode
* would become fully dirty soon anyway and it reportedly
- * reduces inode_lock contention.
+ * reduces lock contention.
*/
mark_inode_dirty(inode);
}