author     John Fastabend <john.fastabend@gmail.com>    2021-11-03 13:47:33 -0700
committer  Daniel Borkmann <daniel@iogearbox.net>       2021-11-09 00:57:19 +0100
commit     b8b8315e39ffaca82e79d86dde26e9144addf66b (patch)
tree       39cae3188e0b1daefd11bca7d479801e15758114 /net
parent     40a34121ac1dc52ed9cd34a8f4e48e32517a52fd (diff)
bpf, sockmap: Remove unhash handler for BPF sockmap usage
We do not need to handle unhash from the BPF side; we can simply wait for the close to happen. The original concern was that a socket could transition from ESTABLISHED state to a new state while the BPF hook was still attached. But we convinced ourselves this is no longer possible, and we also improved BPF sockmap to handle listen sockets, so this is no longer a problem.

More importantly, there are cases where unhash is called while data is still in the receive queue. The BPF unhash logic flushes this data, which is wrong. To be correct it should keep the data in the receive queue and allow a receiving application to continue reading it. This may happen when tcp_abort() is received, for example. Instead of complicating the unhash logic, simply moving all of this into the tcp_close() hook solves it.

Fixes: 51199405f9672 ("bpf: skb_verdict, support SK_PASS on RX BPF path")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Jussi Maki <joamaki@gmail.com>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Link: https://lore.kernel.org/bpf/20211103204736.248403-3-john.fastabend@gmail.com
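For context, sockmap works by installing a copy of the socket's struct proto in which selected callbacks are overridden; after this patch the base config overrides only close() (plus the recvmsg/readable hooks), so unhash falls through to the underlying TCP handler and data queued for receive is preserved. Below is a minimal user-space sketch of that copy-and-override pattern; the names (proto_ops, bpf_close, ...) are made up for illustration and are not the kernel's definitions.

#include <stdio.h>

/* Illustrative stand-in for a table of protocol callbacks. */
struct proto_ops {
	void (*unhash)(void);
	void (*close)(void);
};

static void base_unhash(void) { puts("base unhash: receive queue left intact"); }
static void base_close(void)  { puts("base close"); }

/* sockmap-style override: BPF teardown happens only on close(). */
static void bpf_close(void)   { puts("bpf close: detach psock, then run base close"); }

int main(void)
{
	struct proto_ops base = { .unhash = base_unhash, .close = base_close };

	/* Mirrors the idea of tcp_bpf_rebuild_protos() after this patch:
	 * copy the base ops and override only .close; .unhash keeps pointing
	 * at the base handler, so an unhash (e.g. via tcp_abort()) no longer
	 * flushes data still sitting in the receive queue. */
	struct proto_ops bpf_base = base;
	bpf_base.close = bpf_close;

	bpf_base.unhash(); /* falls through to the base protocol */
	bpf_base.close();  /* BPF-specific teardown point */
	return 0;
}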
Diffstat (limited to 'net')
-rw-r--r--  net/ipv4/tcp_bpf.c | 1 -
1 file changed, 0 insertions, 1 deletion
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 5f4d6f45d87f..246f725b78c9 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -475,7 +475,6 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
struct proto *base)
{
prot[TCP_BPF_BASE] = *base;
- prot[TCP_BPF_BASE].unhash = sock_map_unhash;
prot[TCP_BPF_BASE].close = sock_map_close;
prot[TCP_BPF_BASE].recvmsg = tcp_bpf_recvmsg;
prot[TCP_BPF_BASE].sock_is_readable = sk_msg_is_readable;