author	John Fastabend <john.fastabend@gmail.com>	2020-01-26 16:14:00 -0800
committer	Daniel Borkmann <daniel@iogearbox.net>	2020-01-27 11:16:20 +0100
commit	42a84a8cd0ff0cbff5a4595e1304c4567a30267d (patch)
tree	0cabe8753a6f84619ddce2340490ddd72120f3f8 /kernel
parent	90435a7891a2259b0f74c5a1bc5600d0d64cba8f (diff)
bpf, xdp: Update devmap comments to reflect napi/rcu usage
Now that we rely on synchronize_rcu and call_rcu waiting to
exit preempt-disable regions (NAPI), let's update the comments
to reflect this.
Fixes: 0536b85239b84 ("xdp: Simplify devmap cleanup")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Björn Töpel <bjorn.topel@intel.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/1580084042-11598-2-git-send-email-john.fastabend@gmail.com
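
The commit message leans on the consolidated RCU implementation, in which
synchronize_rcu() and call_rcu() also wait for preempt-disabled regions such
as NAPI poll, so a single grace period covers both rcu_read_lock() readers and
in-flight XDP flush work. Below is a minimal kernel-style sketch of the
teardown pattern this enables; it is illustrative only, and the example_*
names are invented rather than taken from devmap.c.

/* Minimal sketch of the teardown pattern the updated dev_map_free()
 * comment describes. The example_* names are invented; this is not
 * code from devmap.c.
 */
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_entry {
	int data;
};

static struct example_entry __rcu *example_slot;

/* Reader side: either an explicit rcu_read_lock() section or a NAPI poll
 * loop running with preemption disabled. With consolidated RCU,
 * synchronize_rcu() waits for both kinds of critical section.
 */
static int example_read(void)
{
	struct example_entry *e;
	int val = -1;

	rcu_read_lock();
	e = rcu_dereference(example_slot);
	if (e)
		val = e->data;
	rcu_read_unlock();
	return val;
}

/* Teardown: unpublish the entry, then a single synchronize_rcu() guarantees
 * that no reader (rcu_read_lock section or preempt-disabled NAPI region) can
 * still see it, after which the memory may be freed.
 */
static void example_free(void)
{
	struct example_entry *e;

	e = rcu_dereference_protected(example_slot, 1);
	RCU_INIT_POINTER(example_slot, NULL);
	synchronize_rcu();
	kfree(e);
}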
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/bpf/devmap.c | 21
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index f3a44f6ca86e..373c6a30d8a5 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -190,10 +190,12 @@ static void dev_map_free(struct bpf_map *map)
 
 	/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
 	 * so the programs (can be more than one that used this map) were
-	 * disconnected from events. Wait for outstanding critical sections in
-	 * these programs to complete. The rcu critical section only guarantees
-	 * no further reads against netdev_map. It does __not__ ensure pending
-	 * flush operations (if any) are complete.
+	 * disconnected from events. The following synchronize_rcu() guarantees
+	 * both rcu read critical sections complete and waits for
+	 * preempt-disable regions (NAPI being the relevant context here) so we
+	 * are certain there will be no further reads against the netdev_map and
+	 * all flush operations are complete. Flush operations can only be done
+	 * from NAPI context for this reason.
 	 */
 
 	spin_lock(&dev_map_lock);
@@ -503,12 +505,11 @@ static int dev_map_delete_elem(struct bpf_map *map, void *key)
 		return -EINVAL;
 
 	/* Use call_rcu() here to ensure any rcu critical sections have
-	 * completed, but this does not guarantee a flush has happened
-	 * yet. Because driver side rcu_read_lock/unlock only protects the
-	 * running XDP program. However, for pending flush operations the
-	 * dev and ctx are stored in another per cpu map. And additionally,
-	 * the driver tear down ensures all soft irqs are complete before
-	 * removing the net device in the case of dev_put equals zero.
+	 * completed as well as any flush operations because call_rcu
+	 * will wait for preempt-disable region to complete, NAPI in this
+	 * context. And additionally, the driver tear down ensures all
+	 * soft irqs are complete before removing the net device in the
+	 * case of dev_put equals zero.
 	 */
 	old_dev = xchg(&dtab->netdev_map[k], NULL);
 	if (old_dev)
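
The second hunk applies the same reasoning to deferred freeing: once an entry
is atomically removed from the map, the call_rcu() callback runs only after
every reader, including NAPI contexts that may still be flushing, has
finished. A hypothetical sketch of that xchg() + call_rcu() pattern follows;
the example_* names are made up and this is not devmap.c code.

/* Hypothetical sketch of the xchg() + call_rcu() deletion pattern described
 * in the updated dev_map_delete_elem() comment. Names are invented.
 */
#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct example_entry {
	int data;
	struct rcu_head rcu;
};

static struct example_entry *example_slot;

/* RCU callback: runs once a grace period has elapsed, so nothing can still
 * be reading the entry when it is freed.
 */
static void example_entry_free(struct rcu_head *rcu)
{
	struct example_entry *e = container_of(rcu, struct example_entry, rcu);

	kfree(e);
}

/* Delete: atomically unpublish the entry, then defer the free. The callback
 * fires only after every rcu_read_lock() section and preempt-disabled (NAPI)
 * region that could still reference the entry has completed, so no separate
 * flush tracking is needed.
 */
static void example_delete(void)
{
	struct example_entry *old = xchg(&example_slot, NULL);

	if (old)
		call_rcu(&old->rcu, example_entry_free);
}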