author     David S. Miller <davem@davemloft.net>  2017-10-18 12:12:19 +0100
committer  David S. Miller <davem@davemloft.net>  2017-10-18 12:12:19 +0100
commit     452606d6c9cd7cd6d1037d06763c687f617e795f
tree       426663736699bf8fb062846ca34e601413865732 /net/netlink/af_netlink.c
parent     4b70c62b9eafcee0505b440732d2e00c50f3085d
parent     fad3917e361b115f776563366415ffb2fc706bf1
Merge branch 'bpf-cpumap-type-for-XDP_REDIRECT'
Jesper Dangaard Brouer says:
====================
net: New bpf cpumap type for XDP_REDIRECT
Introducing a new way to redirect XDP frames. Notice how no driver
changes are necessary given the design of XDP_REDIRECT.
This redirect map type is called 'cpumap', as it allows redirecting
XDP frames to remote CPUs. The remote CPU will do the SKB allocation
and start the network stack invocation on that CPU.
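For orientation, here is a minimal sketch of the XDP side: a program
that forwards every frame through a BPF_MAP_TYPE_CPUMAP map via
bpf_redirect_map(). The samples/bpf-style "bpf_helpers.h" include, the
struct bpf_map_def definition, the map name and the hard-coded
destination CPU are illustrative assumptions; the in-tree
samples/bpf/xdp_redirect_cpu_kern.c is the authoritative example.

#include <linux/bpf.h>
#include "bpf_helpers.h"	/* SEC(), bpf_redirect_map(), struct bpf_map_def */

/* cpumap: key = CPU index, value = queue size for that CPU's kthread */
struct bpf_map_def SEC("maps") cpu_map = {
	.type		= BPF_MAP_TYPE_CPUMAP,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(__u32),
	.max_entries	= 64,
};

SEC("xdp")
int xdp_redirect_to_cpu(struct xdp_md *ctx)
{
	/* Illustrative: always pick CPU 0.  A real program would derive
	 * the destination CPU from the packet (flow hash, protocol, ...).
	 */
	__u32 cpu_dest = 0;

	/* Queue the frame to the cpumap entry; the remote CPU's kthread
	 * allocates the SKB and injects it into the normal network stack.
	 */
	return bpf_redirect_map(&cpu_map, cpu_dest, 0);
}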
This is a scalability and isolation mechanism that allows separating
the early driver XDP layer from the rest of the netstack, and
assigning dedicated CPUs to this stage. The sysadmin controls and
configures the RX-CPU to NIC-RX-queue mapping (as usual) via procfs
smp_affinity, and how many queues are configured via ethtool
--set-channels. Benchmarks show that a single CPU can handle approx
11 Mpps. Thus, assigning only two NIC RX-queues (and two CPUs) is
sufficient for handling 10Gbit/s wirespeed at the smallest packet
size, 14.88 Mpps. Reducing the number of queues has the advantage
that more packets become available in "bulk" per hard interrupt[1].
[1] https://www.netdevconf.org/2.1/papers/BusyPollingNextGen.pdf
Use-cases:
1. End-host based pre-filtering for DDoS mitigation. This is fast
enough to allow software to see and filter all packets at wirespeed.
Thus, no packets get silently dropped by hardware.
2. Given that NIC HW distributes packets unevenly across RX queues,
this mechanism can be used to redistribute load across CPUs. This
usually happens when the HW is unaware of a new protocol. This
resembles RPS (Receive Packet Steering), just faster, but with more
responsibility placed on the BPF program for correct steering.
3. Auto-scaling or power saving, by only activating the appropriate
number of remote CPUs for handling the current load. The cpumap
tracepoints can function as a feedback loop for this purpose (see
the userspace sketch after this list).
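As a rough illustration of such a control loop, the userspace sketch
below activates and deactivates remote CPUs by adding and removing
cpumap entries at runtime. The <bpf/bpf.h> include path, the helper
function names and the queue size of 192 are assumptions for this
sketch; the real control logic lives in
samples/bpf/xdp_redirect_cpu_user.c.

#include <linux/types.h>
#include <bpf/bpf.h>	/* bpf_map_update_elem(), bpf_map_delete_elem() */

/* Activate a remote CPU: inserting the entry starts a kthread bound to
 * that CPU, with a queue of 'qsize' frames feeding the normal stack.
 */
static int cpumap_activate_cpu(int cpu_map_fd, __u32 cpu)
{
	__u32 qsize = 192;	/* illustrative per-CPU queue size (map value) */

	return bpf_map_update_elem(cpu_map_fd, &cpu, &qsize, 0);
}

/* Deactivate it again: deleting the entry tears the kthread down. */
static int cpumap_deactivate_cpu(int cpu_map_fd, __u32 cpu)
{
	return bpf_map_delete_elem(cpu_map_fd, &cpu);
}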
In V7, a --stress-mode was implemented for the samples program, which,
between each stats update, adds and removes CPUs from the map
concurrently with traffic. I did find and fix some concurrency issues
in the tear-down path; details are in the patch descriptions. The
stress test has now been running for 15 hours without any issues,
while being bombarded with 11.6 Mpps via pktgen_sample04_many_flows.sh.
See individual patches for patchset-version changes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>