author | Magnus Karlsson <magnus.karlsson@intel.com> | 2020-09-10 10:31:04 +0200
committer | Alexei Starovoitov <ast@kernel.org> | 2020-09-14 18:38:11 -0700
commit | 3131cf66d3033aa40db5b5d72a2673315b61c862 (patch)
tree | ae29ca1e0953972061cd7548512b23489ae0d4a1 /samples
parent | d72714c1da138e6755d3bd14662dc5b7f17fae7f (diff)
samples/bpf: Fix one packet sending in xdpsock
Fix the sending of a single packet (or small burst) in xdpsock when
executing in copy mode. Currently, the l2fwd application in xdpsock
only transmits the packets after a batch of them has been received,
which might be confusing if you only send one packet and expect that
it is returned promptly. Fix this by calling sendto() more often and
adding a comment in the code that states that this can be optimized if
needed.
Reported-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com>
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/1599726666-8431-2-git-send-email-magnus.karlsson@gmail.com
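
The copy mode referred to above is selected through the AF_XDP bind flags: opt_xdp_bind_flags in xdpsock_user.c carries XDP_COPY (or XDP_ZEROCOPY) and is handed to the socket when it is created. Below is a minimal sketch of that wiring using the libbpf xsk API; the helper name and the omitted option parsing are assumptions for illustration, not the sample's exact code.

```c
#include <linux/if_xdp.h>	/* XDP_COPY, XDP_ZEROCOPY */
#include <bpf/xsk.h>		/* xsk_socket__create(), struct xsk_socket_config */

/* Hypothetical helper: create an AF_XDP socket bound in copy mode, where Tx
 * must be driven by a syscall (sendto()) rather than by the NAPI loop. */
static int create_copy_mode_socket(struct xsk_socket **xsk, const char *ifname,
				   __u32 queue_id, struct xsk_umem *umem,
				   struct xsk_ring_cons *rx, struct xsk_ring_prod *tx)
{
	struct xsk_socket_config cfg = {
		.rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
		.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
		.bind_flags = XDP_COPY,	/* same flag tested in complete_tx_l2fwd() */
	};

	return xsk_socket__create(xsk, ifname, queue_id, umem, rx, tx, &cfg);
}
```

The same bind-flags value is what the new check in complete_tx_l2fwd() tests to decide whether a sendto() kick is required.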
Diffstat (limited to 'samples')
-rw-r--r-- | samples/bpf/xdpsock_user.c | 8
1 file changed, 8 insertions, 0 deletions
diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
index 4cead341ae57..b6175cb9a31d 100644
--- a/samples/bpf/xdpsock_user.c
+++ b/samples/bpf/xdpsock_user.c
@@ -897,6 +897,14 @@ static inline void complete_tx_l2fwd(struct xsk_socket_info *xsk,
 	if (!xsk->outstanding_tx)
 		return;
 
+	/* In copy mode, Tx is driven by a syscall so we need to use e.g. sendto() to
+	 * really send the packets. In zero-copy mode we do not have to do this, since Tx
+	 * is driven by the NAPI loop. So as an optimization, we do not have to call
+	 * sendto() all the time in zero-copy mode for l2fwd.
+	 */
+	if (opt_xdp_bind_flags & XDP_COPY)
+		kick_tx(xsk);
+
 	ndescs = (xsk->outstanding_tx > opt_batch_size) ? opt_batch_size :
 		xsk->outstanding_tx;
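
kick_tx() itself is outside the hunk; conceptually it issues a zero-length, non-blocking sendto() on the AF_XDP socket's file descriptor, which tells the kernel to transmit whatever descriptors are already queued on the Tx ring. A minimal sketch of that kick is shown below, assuming the libbpf helper xsk_socket__fd(); the set of tolerated errno values and the error handling are simplifications, not the sample's verbatim code.

```c
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <bpf/xsk.h>	/* xsk_socket__fd() */

/* Sketch of a Tx kick: a zero-length, non-blocking sendto() is enough to
 * make the kernel process the descriptors already placed on the Tx ring. */
static void kick_tx_sketch(struct xsk_socket *xsk)
{
	int ret = sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);

	if (ret >= 0)
		return;
	/* Transient conditions: the ring is busy or the driver is out of
	 * resources right now; a later kick will retry. (Assumed set.) */
	if (errno == EAGAIN || errno == EBUSY || errno == ENOBUFS)
		return;
	/* Anything else is a real error for the caller to handle. */
	perror("sendto");
}
```

With the change above, this kick now runs whenever there are outstanding Tx descriptors in copy mode, so a single forwarded packet goes out immediately instead of waiting for a full batch to accumulate.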