author | Eric Dumazet <edumazet@google.com> | 2016-06-13 20:21:50 -0700
committer | David S. Miller <davem@davemloft.net> | 2016-06-15 14:08:34 -0700
commit | 1b5c5493e3e68181be344cb51bf9df192d05ffc2
tree | 9c0f004d07c1fe3314b155695a93f2de51a315c6 /net/core
parent | 35c55c9877f8de0ab129fa1a309271d0ecc868b9
net_sched: add the ability to defer skb freeing
Qdiscs are changed under RTNL protection, often with BH disabled and
the root qdisc spinlock held.
When lots of skbs need to be dropped, we free them under these locks,
causing TX/RX freezes and, more generally, latency spikes.
This commit adds rtnl_kfree_skbs(), used to queue
skbs for deferred freeing.
Actual freeing happens right after RTNL is released,
with appropriate scheduling points.
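For illustration only, a caller that already holds RTNL could hand a
whole private skb list (head..tail, linked through skb->next) to
rtnl_kfree_skbs(); the helper below is a hypothetical sketch and not
part of this patch:

    /* Hypothetical sketch: queue a head..tail skb list for deferred
     * freeing.  The skbs are actually freed right after rtnl_unlock(),
     * with cond_resched() between each free.
     */
    static void flush_skb_list_deferred(struct sk_buff *head,
                                        struct sk_buff *tail)
    {
            ASSERT_RTNL();                  /* the deferred list is RTNL-protected */
            rtnl_kfree_skbs(head, tail);    /* no-op if head or tail is NULL */
    }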
rtnl_qdisc_drop() can also be used in place
of qdisc_drop() when RTNL is held.
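rtnl_qdisc_drop() itself lives outside net/core (in
include/net/sch_generic.h), so it does not appear in this filtered
diff; roughly, and as a sketch rather than the exact upstream body, it
swaps the immediate kfree_skb() of qdisc_drop() for a deferred one:

    static inline int rtnl_qdisc_drop(struct sk_buff *skb, struct Qdisc *sch)
    {
            rtnl_kfree_skbs(skb, skb);      /* defer the free until RTNL is released */
            qdisc_qstats_drop(sch);         /* drop accounting still happens now */

            return NET_XMIT_DROP;
    }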
qdisc_reset_queue() and __qdisc_reset_queue() get
the new behavior, so standard qdiscs like pfifo, pfifo_fast...
have their ->reset() method automatically handled.
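The reset path is likewise outside the net/core filter; a sketch of the
idea, assuming the queue is a struct sk_buff_head (the exact signature
in sch_generic.h may differ):

    static inline void __qdisc_reset_queue(struct sk_buff_head *list)
    {
            /* Backlog in bytes is unknown here; callers fix up stats. */
            if (!skb_queue_empty(list)) {
                    /* first skb .. last skb of the queue */
                    rtnl_kfree_skbs(list->next, list->prev);
                    __skb_queue_head_init(list);
            }
    }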
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/core')
-rw-r--r-- | net/core/rtnetlink.c | 22
1 file changed, 22 insertions, 0 deletions
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index d69c4644f8f2..eb49ca24274a 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -71,9 +71,31 @@ void rtnl_lock(void)
 }
 EXPORT_SYMBOL(rtnl_lock);
 
+static struct sk_buff *defer_kfree_skb_list;
+void rtnl_kfree_skbs(struct sk_buff *head, struct sk_buff *tail)
+{
+	if (head && tail) {
+		tail->next = defer_kfree_skb_list;
+		defer_kfree_skb_list = head;
+	}
+}
+EXPORT_SYMBOL(rtnl_kfree_skbs);
+
 void __rtnl_unlock(void)
 {
+	struct sk_buff *head = defer_kfree_skb_list;
+
+	defer_kfree_skb_list = NULL;
+
 	mutex_unlock(&rtnl_mutex);
+
+	while (head) {
+		struct sk_buff *next = head->next;
+
+		kfree_skb(head);
+		cond_resched();
+		head = next;
+	}
 }
 
 void rtnl_unlock(void)