author    Mark Fasheh <mfasheh@suse.de>  2015-11-05 14:38:00 -0800
committer Chris Mason <clm@fb.com>       2015-11-25 05:27:33 -0800
commit    82bd101b5240d3d1c4078a8017917a40c0dcc514
tree      0105605d854d46179bd8805b91994510f5b79365 /fs/btrfs/qgroup.c
parent    2d9e97761087b46192c18181dfd1e7a930defcfd
btrfs: qgroup: account shared subtree during snapshot delete
Commit 0ed4792 ('btrfs: qgroup: Switch to new extent-oriented qgroup mechanism.') removed our qgroup accounting during btrfs_drop_snapshot(). Predictably, this results in qgroup numbers going bad shortly after a snapshot is removed.

Fix this by adding a dirty extent record when we encounter extents during our shared subtree walk. This effectively restores the functionality we had with the original shared subtree walking code in 1152651 (btrfs: qgroup: account shared subtrees during snapshot delete).

The idea with the original patch (and this one) is that shared subtrees can get skipped during drop_snapshot. The shared subtree walk then allows us a chance to visit those extents and add them to the qgroup work for later processing. This ultimately makes the accounting for drop snapshot work.

The new qgroup code nicely handles all the other extents during the tree walk via the ref dec/inc functions so we don't have to add actions beyond what we had originally.

Signed-off-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Chris Mason <clm@fb.com>
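To make the idea concrete, here is a minimal, hypothetical sketch of what "adding a dirty extent record" during the shared subtree walk amounts to: the walk only notes the extent's bytenr and length and defers the actual accounting to commit time. The record type and the record_shared_extent() helper below are illustrative assumptions, not the actual btrfs call path (the real code keys records by bytenr in a per-transaction tree under the delayed-refs lock, as the hunk below shows).

/*
 * Hypothetical illustration only -- not the real btrfs API. Each extent
 * that would otherwise be skipped during drop_snapshot is recorded
 * (bytenr + length) so qgroup accounting can process it later.
 */
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct dirty_extent {
	struct list_head list;
	u64 bytenr;	/* logical address of the shared extent */
	u64 num_bytes;	/* extent length */
};

static int record_shared_extent(struct list_head *dirty_list,
				spinlock_t *lock, u64 bytenr, u64 num_bytes)
{
	struct dirty_extent *rec;

	rec = kmalloc(sizeof(*rec), GFP_NOFS);
	if (!rec)
		return -ENOMEM;

	rec->bytenr = bytenr;
	rec->num_bytes = num_bytes;

	/* Only note the extent here; the accounting itself runs later. */
	spin_lock(lock);
	list_add_tail(&rec->list, dirty_list);
	spin_unlock(lock);
	return 0;
}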
Diffstat (limited to 'fs/btrfs/qgroup.c')
-rw-r--r--  fs/btrfs/qgroup.c | 2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index fd0a196c8f74..5279fdae7142 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1462,6 +1462,8 @@ struct btrfs_qgroup_extent_record
 	struct btrfs_qgroup_extent_record *entry;
 	u64 bytenr = record->bytenr;
 
+	assert_spin_locked(&delayed_refs->lock);
+
 	while (*p) {
 		parent_node = *p;
 		entry = rb_entry(parent_node, struct btrfs_qgroup_extent_record,
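For context, the hunk sits inside the rb-tree insertion that stores these dirty extent records, keyed by bytenr, and the added assert_spin_locked() documents that callers must already hold the delayed-refs lock. The standalone sketch below shows the same insertion pattern in full; the simplified record type and function name are assumptions, not the kernel's actual code.

/*
 * Standalone sketch of the insertion pattern visible in the hunk above;
 * the record type and function name are simplified assumptions, not
 * the actual fs/btrfs/qgroup.c code.
 */
#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct example_record {
	struct rb_node node;
	u64 bytenr;		/* key: logical address of the extent */
};

static struct example_record *insert_record(struct rb_root *root,
					    spinlock_t *lock,
					    struct example_record *record)
{
	struct rb_node **p = &root->rb_node;
	struct rb_node *parent_node = NULL;
	struct example_record *entry;
	u64 bytenr = record->bytenr;

	/* The tree is only safe to modify with the lock held; this is
	 * what the patch's assert_spin_locked() makes explicit. */
	assert_spin_locked(lock);

	while (*p) {
		parent_node = *p;
		entry = rb_entry(parent_node, struct example_record, node);
		if (bytenr < entry->bytenr)
			p = &(*p)->rb_left;
		else if (bytenr > entry->bytenr)
			p = &(*p)->rb_right;
		else
			return entry;	/* bytenr already recorded */
	}

	rb_link_node(&record->node, parent_node, p);
	rb_insert_color(&record->node, root);
	return NULL;		/* newly inserted */
}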