author		Eric Dumazet <edumazet@google.com>	2017-09-23 12:39:12 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2019-01-26 09:42:55 +0100
commit		de614973ee159fef48ca6255a7324cb64ea31f44 (patch)
tree		3eeb318848d4c33c954672c706c762734e2b9938
parent		e660576a53db1f4d73c786fe3a2c67fc2d878d93 (diff)
net: speed up skb_rbtree_purge()
commit 7c90584c66cc4b033a3b684b0e0950f79e7b7166 upstream.
As measured in my prior patch ("sch_netem: faster rb tree removal"),
rbtree_postorder_for_each_entry_safe() is nice looking but much slower
than using rb_next() directly, except when the tree is small enough
to fit in CPU caches (then the cost is the same).
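
For reference, the two traversal idioms look like this side by side. This is a minimal illustrative sketch, not the patch itself: the rbtree API from <linux/rbtree.h> is real, while "struct item" and the purge_* helper names are hypothetical stand-ins for the sk_buff code.

/* Sketch contrasting the two purge idioms described above. */
#include <linux/rbtree.h>
#include <linux/slab.h>

struct item {
	struct rb_node node;	/* hypothetical node, like skb->rbnode */
};

/* Old idiom: postorder walk, then reset the root in one step.
 * No rb_erase() is needed, but per the measurements above the
 * postorder walk gets expensive once the tree no longer fits
 * in CPU caches.
 */
static void purge_postorder(struct rb_root *root)
{
	struct item *it, *next;

	rbtree_postorder_for_each_entry_safe(it, next, root, node)
		kfree(it);
	*root = RB_ROOT;
}

/* New idiom: in-order walk with rb_first()/rb_next(), erasing and
 * freeing one node at a time. The successor is fetched before
 * rb_erase()/kfree(), while the node is still linked and allocated.
 */
static void purge_inorder(struct rb_root *root)
{
	struct rb_node *p = rb_first(root);

	while (p) {
		struct item *it = rb_entry(p, struct item, node);

		p = rb_next(p);
		rb_erase(&it->node, root);
		kfree(it);
	}
}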
Also note that there is not even an increase in text size:
$ size net/core/skbuff.o.before net/core/skbuff.o
text data bss dec hex filename
40711 1298 0 42009 a419 net/core/skbuff.o.before
40711 1298 0 42009 a419 net/core/skbuff.o
From: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r--	net/core/skbuff.c	11
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 9703924ed071..8a57bbaf7452 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2388,12 +2388,15 @@ EXPORT_SYMBOL(skb_queue_purge);
  */
 void skb_rbtree_purge(struct rb_root *root)
 {
-	struct sk_buff *skb, *next;
+	struct rb_node *p = rb_first(root);
 
-	rbtree_postorder_for_each_entry_safe(skb, next, root, rbnode)
-		kfree_skb(skb);
+	while (p) {
+		struct sk_buff *skb = rb_entry(p, struct sk_buff, rbnode);
 
-	*root = RB_ROOT;
+		p = rb_next(p);
+		rb_erase(&skb->rbnode, root);
+		kfree_skb(skb);
+	}
 }
 
 /**
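
Two details in the new loop are worth noting. First, the successor is saved via rb_next(p) before rb_erase() and kfree_skb() run, because finding the successor requires the node to still be linked into the tree (and still allocated). Second, the old "*root = RB_ROOT;" reset is no longer needed: erasing every node one at a time leaves the root empty by construction.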