author | Paolo Abeni <pabeni@redhat.com> | 2019-03-20 11:02:05 +0100
committer | David S. Miller <davem@davemloft.net> | 2019-03-20 11:18:55 -0700
commit | b71b5837f8711dbc4bc0424cb5c75e5921be055c (patch)
tree | 4a731ba20e46226c135ae28213de18152338a2dd /net/packet
parent | 4bd97d51a5e602ea1fbdab8c2d653513dea17115 (diff)
packet: rework packet_pick_tx_queue() to use common code selection
Currently, packet_pick_tx_queue() is the only caller of
ndo_select_queue() that passes a fallback argument other than
netdev_pick_tx().
By leveraging the recorded rx queue, we can obtain a similar queue
selection behavior using the core helpers. After this change,
ndo_select_queue() is always invoked with netdev_pick_tx() as the fallback.
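As a rough illustration of the idea (not the patch itself), the cpu-based spreading previously done by dev_pick_tx_cpu_id() can be approximated by recording the cpu-derived index as the skb's rx queue and then deferring to the common netdev_pick_tx() helper; pick_tx_queue_sketch() below is a hypothetical name used only for this sketch:

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/smp.h>

/* Hypothetical sketch of the selection idea; the real patch is shown
 * in the diff below and also handles the XPS sender_cpu hint. */
static u16 pick_tx_queue_sketch(struct net_device *dev, struct sk_buff *skb)
{
	int cpu = raw_smp_processor_id();

	/* Record the would-be queue index so the common helpers can pick
	 * it back up via the skb's rx queue when no XPS/TC mapping wins. */
	skb_record_rx_queue(skb, cpu % dev->real_num_tx_queues);

	/* Common code selection: consults XPS maps, TC mappings and the
	 * recorded rx queue. */
	return netdev_pick_tx(dev, skb, NULL);
}
```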
We can then change the ndo_select_queue() signature in a follow-up patch,
dropping an indirect call per transmitted packet in some scenarios
(e.g. TCP SYN and generic XDP xmit).
This slightly changes how AF_PACKET queue selection happens when
PACKET_QDISC_BYPASS is set: it is now more similar to plain dev_queue_xmit(),
taking into account both the XPS and TC mappings.
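For context, PACKET_QDISC_BYPASS is requested per socket from user space; below is a minimal sketch (error handling abbreviated, helper name open_qdisc_bypass_socket() is illustrative, not from the patch) of enabling the mode whose tx-queue selection this commit changes:

```c
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <arpa/inet.h>

/* Open an AF_PACKET socket and request qdisc bypass. */
int open_qdisc_bypass_socket(void)
{
	int one = 1;
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0)
		return -1;

	/* With PACKET_QDISC_BYPASS set, transmitted frames skip the qdisc
	 * layer and the tx queue is chosen by packet_pick_tx_queue(). */
	if (setsockopt(fd, SOL_PACKET, PACKET_QDISC_BYPASS,
		       &one, sizeof(one)) < 0)
		return -1;

	return fd;
}
```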
v1 -> v2:
- rebased after helper name change
RFC -> v1:
- initialize sender_cpu to the expected value
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/packet')
-rw-r--r-- | net/packet/af_packet.c | 15
1 file changed, 7 insertions(+), 8 deletions(-)
```diff
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 323655a25674..a8809dc0e1ab 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -275,24 +275,23 @@ static bool packet_use_direct_xmit(const struct packet_sock *po)
 	return po->xmit == packet_direct_xmit;
 }
 
-static u16 __packet_pick_tx_queue(struct net_device *dev, struct sk_buff *skb,
-				  struct net_device *sb_dev)
-{
-	return dev_pick_tx_cpu_id(dev, skb, sb_dev, NULL);
-}
-
 static u16 packet_pick_tx_queue(struct sk_buff *skb)
 {
 	struct net_device *dev = skb->dev;
 	const struct net_device_ops *ops = dev->netdev_ops;
+	int cpu = raw_smp_processor_id();
 	u16 queue_index;
 
+#ifdef CONFIG_XPS
+	skb->sender_cpu = cpu + 1;
+#endif
+	skb_record_rx_queue(skb, cpu % dev->real_num_tx_queues);
 	if (ops->ndo_select_queue) {
 		queue_index = ops->ndo_select_queue(dev, skb, NULL,
-						    __packet_pick_tx_queue);
+						    netdev_pick_tx);
 		queue_index = netdev_cap_txqueue(dev, queue_index);
 	} else {
-		queue_index = __packet_pick_tx_queue(dev, skb, NULL);
+		queue_index = netdev_pick_tx(dev, skb, NULL);
 	}
 
 	return queue_index;
```