author | Nikolay Borisov <nborisov@suse.com> | 2018-02-14 14:37:26 +0200
---|---|---
committer | David Sterba <dsterba@suse.com> | 2018-03-31 01:26:51 +0200
commit | 2e32ef87b074cb8098436634b649b4b2b523acbe (patch) |
tree | fdd5d24ad40034addd3bf1eb19c8a21e2d291e1e /fs/btrfs/locking.c |
parent | 7c829b722dffb22aaf9e3ea1b1d88dac49bd0768 (diff) |
btrfs: Relax memory barrier in btrfs_tree_unlock
When performing an unlock on an extent buffer we'd like to order the
decrement of extent_buffer::blocking_writers with waking up any
waiters. In such situations it's sufficient to use smp_mb__after_atomic
rather than the heavy smp_mb. On architectures where atomic operations
are fully ordered (such as x86 or s390), unconditionally executing
a heavyweight smp_mb instruction causes a severe performance hit
while bringing no improvement in correctness.
The better approach is to use the appropriate smp_mb__after_atomic
routine, which does the correct thing (invoke a full smp_mb or, in the
case of ordered atomics, insert a compiler barrier). Put another way,
an RMW atomic op + smp_mb__after_atomic is, in terms of semantics,
equivalent to a full smp_mb. This ensures that none of the problems
described in the comment accompanying waitqueue_active occur.
No functional changes.
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
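
To make the ordering concrete, here is a minimal kernel-style sketch of the wake/wait pattern the commit message refers to. The `demo_lock`, `demo_unlock` and `demo_wait` names are hypothetical stand-ins, not the actual btrfs code: the unlocking side must make its counter update visible before the lockless `waitqueue_active()` check, and the sleeping side must queue itself before re-checking the counter, or a wakeup can be lost.

```c
#include <linux/atomic.h>
#include <linux/sched.h>
#include <linux/wait.h>

/* Hypothetical structure standing in for extent_buffer. */
struct demo_lock {
	atomic_t writers;
	wait_queue_head_t wq;
};

static void demo_unlock(struct demo_lock *dl)
{
	atomic_dec(&dl->writers);	/* the RMW atomic op */
	/*
	 * Order the decrement against the waitqueue read below: a full
	 * barrier on weakly ordered CPUs, only a compiler barrier on
	 * architectures whose atomics are already fully ordered.
	 */
	smp_mb__after_atomic();
	if (waitqueue_active(&dl->wq))	/* lockless check */
		wake_up(&dl->wq);
}

static void demo_wait(struct demo_lock *dl)
{
	DEFINE_WAIT(wait);

	while (atomic_read(&dl->writers)) {
		/* Queue ourselves first, then re-check the condition. */
		prepare_to_wait(&dl->wq, &wait, TASK_UNINTERRUPTIBLE);
		if (atomic_read(&dl->writers))
			schedule();
		finish_wait(&dl->wq, &wait);
	}
}
```

The waiter's half of the pairing comes for free: prepare_to_wait() takes the waitqueue lock and updates the task state with a barrier, so the waiter is visible on the queue before it re-reads the counter.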
Diffstat (limited to 'fs/btrfs/locking.c')
-rw-r--r-- | fs/btrfs/locking.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/locking.c b/fs/btrfs/locking.c
index d13128c70ddd..621083f8932c 100644
--- a/fs/btrfs/locking.c
+++ b/fs/btrfs/locking.c
@@ -290,7 +290,7 @@ void btrfs_tree_unlock(struct extent_buffer *eb)
 		/*
 		 * Make sure counter is updated before we wake up waiters.
 		 */
-		smp_mb();
+		smp_mb__after_atomic();
 		if (waitqueue_active(&eb->write_lock_wq))
 			wake_up(&eb->write_lock_wq);
 	} else {
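
For reference, this is roughly how the barrier selection works. The snippet below is a simplified condensation, not the kernel's literal source (the real definitions are layered through __smp_mb__after_atomic in include/asm-generic/barrier.h and the per-arch headers):

```c
/*
 * Simplified: architectures whose atomic RMW ops are fully ordered
 * (x86, s390, ...) reduce smp_mb__after_atomic() to a compiler
 * barrier; everyone else gets a real memory barrier.
 */
#if defined(CONFIG_X86) || defined(CONFIG_S390)
#define smp_mb__after_atomic()	barrier()	/* compiler barrier only */
#else
#define smp_mb__after_atomic()	smp_mb()	/* full memory barrier */
#endif
```

Paired with the preceding atomic_dec() of blocking_writers, this gives the same guarantee as the smp_mb() it replaces, without paying for a redundant fence on strongly ordered hardware.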