author    Eric Dumazet <edumazet@google.com>  2018-10-31 08:39:13 -0700
committer David S. Miller <davem@davemloft.net>  2018-11-03 15:40:01 -0700
commit    fe60faa5063822f2d555f4f326c7dd72a60929bf
tree      4bf57a47c7cbb4c077b606ae00ede02849896e19 /net
parent    3e59020abf0f88182730527ee5b862e786eb485a
net: do not abort bulk send on BQL status
Before calling dev_hard_start_xmit(), upper layers tried to cook an optimal skb list based on the BQL budget. The problem is that GSO packets can end up consuming more than the BQL budget.

Breaking the loop is not useful, since requeued packets are ahead of any packets still in the qdisc. It is also more expensive, since the next TX completion will push these packets later, when the skbs are no longer in cpu caches. It also behaves differently from TSO packets, which can exceed the BQL limit by a large amount.

Note that drivers should use __netdev_tx_sent_queue() in order to have optimal xmit_more support, and to avoid useless atomic operations, as shown in the following patch.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
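For illustration, a driver's ndo_start_xmit() might adopt the helper like this. This is a minimal sketch, not code from this patch: my_ring, ring->txq, place_skb_on_ring() and hw_ring_doorbell() are hypothetical names, and skb->xmit_more is the batching flag as it existed in kernels of this era.

/* Minimal sketch (hypothetical driver): BQL accounting with
 * __netdev_tx_sent_queue(). Intermediate skbs of a batch
 * (xmit_more == true) only queue their byte count; the stack-xoff
 * test-and-set is deferred to the last skb. The helper returns
 * true when the doorbell must be rung.
 */
static netdev_tx_t my_ring_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_ring *ring = netdev_priv(dev);	/* hypothetical */

	place_skb_on_ring(ring, skb);			/* hypothetical */

	if (__netdev_tx_sent_queue(ring->txq, skb->len, skb->xmit_more))
		hw_ring_doorbell(ring);			/* hypothetical */

	return NETDEV_TX_OK;
}

Compared with calling netdev_tx_sent_queue() unconditionally, this defers the atomic __QUEUE_STATE_STACK_XOFF update to the tail of the batch, which is the "useless atomic operations" the message refers to.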
Diffstat (limited to 'net')
-rw-r--r--  net/core/dev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index 77d43ae2a7bb..0ffcbdd55fa9 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3272,7 +3272,7 @@ struct sk_buff *dev_hard_start_xmit(struct sk_buff *first, struct net_device *de
 		}
 		skb = next;
-		if (netif_xmit_stopped(txq) && skb) {
+		if (netif_tx_queue_stopped(txq) && skb) {
 			rc = NETDEV_TX_BUSY;
 			break;
 		}
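The behavioral change hinges on which queue-state bits the two helpers test. For reference, their definitions in include/linux/netdevice.h around this kernel version read:

/* Tests only the driver-controlled xoff bit. */
static inline bool netif_tx_queue_stopped(const struct netdev_queue *dev_queue)
{
	return test_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}

/* Tests any xoff bit, including the stack (BQL) one. */
static inline bool netif_xmit_stopped(const struct netdev_queue *dev_queue)
{
	return dev_queue->state & QUEUE_STATE_ANY_XOFF;
}

QUEUE_STATE_ANY_XOFF covers both __QUEUE_STATE_DRV_XOFF and __QUEUE_STATE_STACK_XOFF, and BQL sets the latter when its byte budget is exhausted. After this patch, the bulk-send loop therefore breaks only when the driver itself has stopped the queue, not on BQL status.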