| author | Eric Dumazet <eric.dumazet@gmail.com> | 2011-05-23 11:02:42 +0000 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@suse.de> | 2011-06-03 10:34:00 +0900 |
| commit | 1003a81b92683e72019b8160ac59c7a2651a74e5 | |
| tree | 834dacc954345519b5b6ec9a78ac421286173bd3 /net | |
| parent | 6c063e6ac38b809e6039ebe59d50d28d1db502c6 | |
sch_sfq: avoid giving spurious NET_XMIT_CN signals
[ Upstream commit 8efa885406359af300d46910642b50ca82c0fe47 ]
While chasing a possible net_sched bug, I found that IP fragments have
little chance of passing through a congested SFQ qdisc:
- Say the SFQ qdisc is full because one flow is unresponsive.
- ip_fragment() wants to send two fragments belonging to an idle flow.
- sfq_enqueue() queues the first packet, but sees that the queue limit
  has been reached.
- sfq_enqueue() drops one packet from the 'big consumer' and returns
  NET_XMIT_CN.
- ip_fragment() cancels the remaining fragments, as sketched below.
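(The caller-side effect can be sketched in a few lines of plain C. This is a
minimal illustration of the behaviour described above, not the actual
ip_fragment() code; every function name in it is a hypothetical stand-in.)

```c
/* Sketch (plain C, not kernel code) of why a spurious NET_XMIT_CN
 * starves fragmented datagrams: the fragment loop treats any nonzero
 * enqueue result as failure and cancels the rest.  Only the
 * NET_XMIT_* values are real; the functions are illustrative.
 */
#include <stdio.h>

#define NET_XMIT_SUCCESS 0x00
#define NET_XMIT_CN      0x02	/* congestion notification */

/* Stand-in for the pre-patch qdisc: once the queue limit is hit it
 * always answers NET_XMIT_CN, even though the dropped packet belonged
 * to the unresponsive 'big consumer', not to the enqueuing flow.
 */
static int enqueue_before_patch(void)
{
	return NET_XMIT_CN;
}

/* Stand-in for the ip_fragment() output loop. */
static int send_fragments(int nfrags)
{
	for (int i = 0; i < nfrags; i++) {
		int err = enqueue_before_patch();

		if (err != NET_XMIT_SUCCESS) {
			printf("fragment %d/%d got NET_XMIT_CN, cancelling the rest\n",
			       i + 1, nfrags);
			return err;
		}
	}
	return NET_XMIT_SUCCESS;
}

int main(void)
{
	/* The idle flow's two-fragment datagram never gets through
	 * while the qdisc stays congested.
	 */
	return send_fragments(2);
}
```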
This patch restores fairness by making sure we return NET_XMIT_CN only if
we dropped a packet from the same flow.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Patrick McHardy <kaber@trash.net>
CC: Jarek Poplawski <jarkao2@gmail.com>
CC: Jamal Hadi Salim <hadi@cyberus.ca>
CC: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'net')
-rw-r--r-- net/sched/sch_sfq.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
```diff
diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
index edea8cefec6c..17289d706868 100644
--- a/net/sched/sch_sfq.c
+++ b/net/sched/sch_sfq.c
@@ -361,7 +361,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 {
 	struct sfq_sched_data *q = qdisc_priv(sch);
 	unsigned int hash;
-	sfq_index x;
+	sfq_index x, qlen;
 	struct sfq_slot *slot;
 	int uninitialized_var(ret);
 
@@ -405,8 +405,12 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch)
 	if (++sch->q.qlen <= q->limit)
 		return NET_XMIT_SUCCESS;
 
+	qlen = slot->qlen;
 	sfq_drop(sch);
-	return NET_XMIT_CN;
+	/* Return Congestion Notification only if we dropped a packet
+	 * from this flow.
+	 */
+	return (qlen != slot->qlen) ? NET_XMIT_CN : NET_XMIT_SUCCESS;
 }
 
 static struct sk_buff *
```
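To see the fixed decision in isolation, here is a standalone sketch (plain C,
compilable outside the kernel) of the qlen-snapshot technique the patch uses;
struct slot, drop_from_longest() and enqueue_overlimit() are simplified
stand-ins, not the real sch_sfq data structures:

```c
/* Standalone sketch of the fixed return-value logic: congestion is
 * only signalled to the caller if the drop hit the caller's own flow.
 */
#include <stdio.h>

#define NET_XMIT_SUCCESS 0x00
#define NET_XMIT_CN      0x02

struct slot {
	unsigned int qlen;
};

/* Simplified stand-in for sfq_drop(): remove one packet from the
 * longest slot.
 */
static void drop_from_longest(struct slot *slots, int nslots)
{
	struct slot *victim = &slots[0];

	for (int i = 1; i < nslots; i++)
		if (slots[i].qlen > victim->qlen)
			victim = &slots[i];
	victim->qlen--;
}

/* Mirrors the patched tail of sfq_enqueue(): snapshot our slot's
 * length, drop from the longest slot, and report congestion only if
 * our own slot shrank.
 */
static int enqueue_overlimit(struct slot *slots, int nslots,
			     struct slot *ours)
{
	unsigned int qlen = ours->qlen;

	drop_from_longest(slots, nslots);
	return (qlen != ours->qlen) ? NET_XMIT_CN : NET_XMIT_SUCCESS;
}

int main(void)
{
	/* Slot 0: the unresponsive big consumer.  Slot 1: the idle
	 * flow that just queued one fragment.
	 */
	struct slot slots[2] = { { .qlen = 100 }, { .qlen = 1 } };
	int ret = enqueue_overlimit(slots, 2, &slots[1]);

	printf("idle flow sees %s\n",
	       ret == NET_XMIT_CN ? "NET_XMIT_CN" : "NET_XMIT_SUCCESS");
	return ret == NET_XMIT_SUCCESS ? 0 : 1;
}
```

The comparison works because sfq_drop() picks its victim by slot length
(typically the longest slot): if the enqueuing flow's slot length is unchanged
after the drop, the victim was another flow, and the caller should not be told
its own packet was lost.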