author     Jeff Layton <jlayton@primarydata.com>  2015-01-16 15:05:54 -0500
committer  Jeff Layton <jlayton@primarydata.com>  2015-01-16 15:05:54 -0500
commit     4a075e39c86490cc0f0c10ac6abe3592d1689463 (patch)
tree       8da8633f9f717128c02a08ad15b7d9f067091acb  /fs/inode.c
parent     dd459bb1974c5e9cff3dfbf4f6fdb3e9363ef32e (diff)
locks: add a new struct file_locking_context pointer to struct inode
The current scheme of using the i_flock list is really difficult to manage. There is also a legitimate desire for a per-inode spinlock to manage these lists that isn't the i_lock.

Start conversion to a new scheme to eventually replace the old i_flock list with a new "file_lock_context" object.

We start by adding a new i_flctx to struct inode. For now, it lives in parallel with the i_flock list, but will eventually replace it. The idea is to allocate a structure to sit in that pointer and act as a locus for all things file locking.

We allocate a file_lock_context for an inode when the first lock is added to it, and it's only freed when the inode is freed. We use the i_lock to protect the assignment, but afterward it should mostly be accessed locklessly.

Signed-off-by: Jeff Layton <jlayton@primarydata.com>
Acked-by: Christoph Hellwig <hch@lst.de>
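The "allocate on first lock, assign under i_lock, then read locklessly" scheme described above is implemented on the fs/locks.c side of this series, which is not part of this diff. A minimal sketch of how such a lazy allocation could look; the helper name locks_get_lock_context and the flc_flock member are taken as assumptions from the companion change and are only illustrative here:

	/*
	 * Sketch only: allocate the per-inode file_lock_context the first
	 * time a lock is set on the inode.  The i_lock serializes the
	 * assignment so two racing lockers cannot both install a context;
	 * later readers simply dereference inode->i_flctx with no lock.
	 */
	struct file_lock_context *
	locks_get_lock_context(struct inode *inode)
	{
		struct file_lock_context *new;

		if (likely(inode->i_flctx))
			goto out;

		new = kzalloc(sizeof(*new), GFP_KERNEL); /* real code may use a kmem_cache */
		if (!new)
			goto out;

		INIT_LIST_HEAD(&new->flc_flock);

		/* Install the context unless someone beat us to it. */
		spin_lock(&inode->i_lock);
		if (likely(!inode->i_flctx)) {
			inode->i_flctx = new;
			new = NULL;
		}
		spin_unlock(&inode->i_lock);

		kfree(new);	/* no-op if we won the race */
	out:
		return inode->i_flctx;
	}

Losing the race simply means freeing the just-allocated context and using the one the winner installed, which is why the assignment needs the i_lock but subsequent readers do not.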
Diffstat (limited to 'fs/inode.c')
-rw-r--r--   fs/inode.c   3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/inode.c b/fs/inode.c
index aa149e7262ac..f30872ade6d7 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -194,7 +194,7 @@ int inode_init_always(struct super_block *sb, struct inode *inode)
 #ifdef CONFIG_FSNOTIFY
 	inode->i_fsnotify_mask = 0;
 #endif
-
+	inode->i_flctx = NULL;
 	this_cpu_inc(nr_inodes);
 
 	return 0;
@@ -237,6 +237,7 @@ void __destroy_inode(struct inode *inode)
 	BUG_ON(inode_has_buffers(inode));
 	security_inode_free(inode);
 	fsnotify_inode_delete(inode);
+	locks_free_lock_context(inode->i_flctx);
 	if (!inode->i_nlink) {
 		WARN_ON(atomic_long_read(&inode->i_sb->s_remove_count) == 0);
 		atomic_long_dec(&inode->i_sb->s_remove_count);
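The locks_free_lock_context() call added in the __destroy_inode() hunk above runs for every inode, including the many that never had a lock set, so it has to tolerate a NULL i_flctx. Its definition lives in fs/locks.c and is outside this diff; the following is only a sketch of the expected behaviour, with internals assumed to mirror the allocation sketch above:

	/*
	 * Sketch only: release the per-inode lock context, if one was ever
	 * allocated.  By the time the inode is being destroyed nothing else
	 * can reach it, so no locking is needed here.
	 */
	void locks_free_lock_context(struct file_lock_context *ctx)
	{
		if (!ctx)
			return;		/* common case: inode never had a lock */

		WARN_ON_ONCE(!list_empty(&ctx->flc_flock));
		kfree(ctx);		/* real code may return it to a kmem_cache */
	}

Freeing only at inode destruction keeps the lifetime rules simple: once installed, i_flctx never changes or disappears while the inode is live, which is what makes the lockless reads safe.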