author		Filipe Manana <fdmanana@suse.com>	2014-10-13 12:28:39 +0100
committer	Chris Mason <clm@fb.com>		2014-11-20 17:14:29 -0800
commit		c8fd3de79f44f5d41bc3a801214faf667b95df9d (patch)
tree		493fbd0a05415f8aa2416564b19cea6c3544564a /fs/btrfs/extent_io.c
parent		e38e2ed701ff5f3d889c8dda5fe863e165e60d61 (diff)
Btrfs: avoid returning -ENOMEM in convert_extent_bit() too early
We try to allocate an extent state before acquiring the tree's spinlock,
just in case we end up needing to split an existing extent state into two.
If that allocation failed, we would return -ENOMEM.
However, our only caller (the transaction/log commit code) passes in an
extent state that was cached from a call to find_first_extent_bit() and
that is very likely to match the input range exactly (always true for a
transaction commit and very often, but not always, true for a log commit).
In that case we end up not needing the initial extent state reserved for
an eventual split at all. Therefore don't return -ENOMEM if we can't
allocate the temporary extent state, since we might not need it at all,
and if we do end up needing one, we'll allocate it later anyway.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
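
The allocation strategy the patch switches to can be illustrated outside the kernel. The sketch below is a simplified user-space model, not the btrfs code: convert_range(), range_matches_cached() and struct state are hypothetical stand-ins, malloc() replaces alloc_extent_state(), and the locking and range lookup are reduced to comments. It only demonstrates the idea of treating the pre-lock allocation as best effort on the first pass and failing with -ENOMEM only once a retry proves the split state is really needed.

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for a btrfs extent state record. */
struct state {
	unsigned long bits;
};

/*
 * Pretend lookup: in the real code this is only known after taking the
 * tree's spinlock and comparing the cached state against [start, end].
 */
static bool range_matches_cached(bool have_exact_cached_state)
{
	return have_exact_cached_state;
}

static int convert_range(bool have_exact_cached_state)
{
	struct state *prealloc = NULL;
	bool first_iteration = true;

again:
	if (!prealloc) {
		/*
		 * Best effort on the first pass: a cached state may cover
		 * the whole range, so a failed allocation is not yet fatal.
		 */
		prealloc = malloc(sizeof(*prealloc));
		if (!prealloc && !first_iteration)
			return -ENOMEM;
	}

	/* ...lock the tree and look up the target range here... */

	if (range_matches_cached(have_exact_cached_state)) {
		free(prealloc);	/* never needed; free(NULL) is a no-op */
		return 0;
	}

	if (!prealloc) {
		/* A split is needed after all; retry and insist on memory. */
		first_iteration = false;
		goto again;
	}

	/* ...use prealloc to split the existing extent state... */
	free(prealloc);
	return 0;
}

int main(void)
{
	printf("exact cached match: %d\n", convert_range(true));
	printf("split required:     %d\n", convert_range(false));
	return 0;
}

With an exactly matching cached state the conversion succeeds even when the first allocation fails, which is why bailing out with -ENOMEM before taking the lock was needlessly pessimistic.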
Diffstat (limited to 'fs/btrfs/extent_io.c')
-rw-r--r--	fs/btrfs/extent_io.c	11
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 0d931b143c00..654ed3de0054 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1066,13 +1066,21 @@ int convert_extent_bit(struct extent_io_tree *tree, u64 start, u64 end,
 	int err = 0;
 	u64 last_start;
 	u64 last_end;
+	bool first_iteration = true;
 
 	btrfs_debug_check_extent_io_range(tree, start, end);
 
 again:
 	if (!prealloc && (mask & __GFP_WAIT)) {
+		/*
+		 * Best effort, don't worry if extent state allocation fails
+		 * here for the first iteration. We might have a cached state
+		 * that matches exactly the target range, in which case no
+		 * extent state allocations are needed. We'll only know this
+		 * after locking the tree.
+		 */
 		prealloc = alloc_extent_state(mask);
-		if (!prealloc)
+		if (!prealloc && !first_iteration)
 			return -ENOMEM;
 	}
 
@@ -1242,6 +1250,7 @@ search_again:
 	spin_unlock(&tree->lock);
 	if (mask & __GFP_WAIT)
 		cond_resched();
+	first_iteration = false;
 	goto again;
 }
 