author    | Ilias Apalodimas <ilias.apalodimas@linaro.org> | 2021-07-16 10:02:18 +0300
committer | David S. Miller <davem@davemloft.net>          | 2021-07-16 11:37:00 -0700
commit    | 2cc3aeb5ecccec0d266813172fcd82b4b5fa5803 (patch)
tree      | 2f8486b71126e60458c257fd72c0561a8e68b8e6 /net
parent    | 20192d9c9f6ae447c461285c915502ffbddf5696 (diff)
skbuff: Fix a potential race while recycling page_pool packets
As Alexander points out, when we try to recycle a cloned/expanded SKB we
might trigger a race. The recycling code relies on the pp_recycle bit to
trigger, and that bit is carried over to cloned SKBs. If the cloned SKB is
then expanded, or if we take references to its frags, call
skb_release_data() and overwrite skb->head, we end up with separate
instances accessing the same page frags. Since skb_release_data() will
first try to recycle the frags, there is a potential race between the
original and the cloned SKB, as both will have the pp_recycle bit set.
Fix this by explicitly marking those SKBs as not recyclable.
The atomic_sub_return() effectively limits us to a single release path:
a call to skb_release_data() either gives up the option to perform the
recycling (when other references to the data remain) or releases the pages
back to the page pool (when it drops the last reference). A simplified
sketch of this sequence follows the sign-off tags below.
Fixes: 6a5bcd84e886 ("page_pool: Allow drivers to hint on SKB recycling")
Reported-by: Alexander Duyck <alexanderduyck@fb.com>
Suggested-by: Alexander Duyck <alexanderduyck@fb.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
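
To make the race described above concrete, here is a minimal, single-threaded
userspace model of the sequence. It is not kernel code: `fake_skb`, `fake_shinfo`,
`pool_put()`, `release_data()` and `frag_unref()` are invented stand-ins for
`sk_buff`, `skb_shared_info`, the page_pool return path, `skb_release_data()` and
`__skb_frag_unref()`, and the real locking and page refcounting are deliberately
left out. It only shows why a clone must give up its copy of the pp_recycle bit.

```c
/* Toy model: two SKB-like owners share one page-pool frag.  Without the
 * fix both keep pp_recycle set, so the frag can be handed back to the
 * pool twice.  All names below are invented for this illustration.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_shinfo {
	int  dataref;       /* like shinfo->dataref, simplified and non-atomic */
	bool frag_in_pool;  /* does the page pool still own the frag page?     */
};

struct fake_skb {
	struct fake_shinfo *shinfo;
	bool pp_recycle;    /* copied to clones, like skb->pp_recycle */
};

static void pool_put(struct fake_shinfo *sh)
{
	if (!sh->frag_in_pool) {
		printf("BUG: frag recycled twice\n");
		return;
	}
	sh->frag_in_pool = false;
	printf("frag returned to the pool\n");
}

/* Models skb_release_data(): only the caller dropping the last data
 * reference recycles.  'fixed' mirrors the patch, which clears the
 * recycle bit on every exit from skb_release_data().
 */
static void release_data(struct fake_skb *skb, bool fixed)
{
	if (--skb->shinfo->dataref == 0 && skb->pp_recycle)
		pool_put(skb->shinfo);
	if (fixed)
		skb->pp_recycle = false;
}

/* Models __skb_frag_unref(frag, skb->pp_recycle) on a copied frag. */
static void frag_unref(struct fake_skb *skb)
{
	if (skb->pp_recycle)
		pool_put(skb->shinfo);
}

int main(void)
{
	struct fake_shinfo sh = { .dataref = 2, .frag_in_pool = true };
	struct fake_skb orig  = { .shinfo = &sh, .pp_recycle = true };
	struct fake_skb clone = { .shinfo = &sh, .pp_recycle = true };
	bool fixed = false;		/* set to true to apply the fix */

	release_data(&clone, fixed);	/* clone expands its head: not last ref */
	release_data(&orig, fixed);	/* last reference: frag goes to the pool */
	frag_unref(&clone);		/* clone later drops its copied frag */
	return 0;
}
```

With `fixed = false` the frag is handed back twice (the second attempt prints the
BUG line); flipping `fixed` to `true` mirrors the patch: the clone's recycle bit is
cleared when it lets go of its data reference, so only the last owner recycles.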
Diffstat (limited to 'net')
-rw-r--r-- | net/core/skbuff.c | 13 |
1 file changed, 12 insertions(+), 1 deletion(-)
```diff
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index f63de967ac25..0fe97d660790 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -663,7 +663,7 @@ static void skb_release_data(struct sk_buff *skb)
 	if (skb->cloned &&
 	    atomic_sub_return(skb->nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1,
 			      &shinfo->dataref))
-		return;
+		goto exit;
 
 	skb_zcopy_clear(skb, true);
 
@@ -674,6 +674,17 @@ static void skb_release_data(struct sk_buff *skb)
 		kfree_skb_list(shinfo->frag_list);
 
 	skb_free_head(skb);
+exit:
+	/* When we clone an SKB we copy the recycling bit. The pp_recycle
+	 * bit is only set on the head though, so in order to avoid races
+	 * while trying to recycle fragments on __skb_frag_unref() we need
+	 * to make one SKB responsible for triggering the recycle path.
+	 * So disable the recycling bit if an SKB is cloned and we have
+	 * additional references to the fragmented part of the SKB.
+	 * Eventually the last SKB will have the recycling bit set and its
+	 * dataref set to 0, which will trigger the recycling.
+	 */
+	skb->pp_recycle = 0;
 }
 
 /*
```
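
One detail worth spelling out from the first hunk: `dataref` packs two counters
into one word. The upper half (above `SKB_DATAREF_SHIFT`, which is 16 in mainline)
counts payload-only references, the lower half counts references to all of
skb->data, which is why a headerless clone subtracts `(1 << SKB_DATAREF_SHIFT) + 1`
on release. The snippet below is an illustrative userspace calculation under those
assumptions; `pack_dataref()` and the variable names are made up for the example.

```c
/* Illustrative arithmetic only; the constants mirror include/linux/skbuff.h,
 * the helper is invented for this example.
 */
#include <stdio.h>

#define SKB_DATAREF_SHIFT 16
#define SKB_DATAREF_MASK  ((1 << SKB_DATAREF_SHIFT) - 1)

/* upper half: payload-only references, lower half: all references */
static int pack_dataref(int payload_refs, int total_refs)
{
	return (payload_refs << SKB_DATAREF_SHIFT) | total_refs;
}

int main(void)
{
	/* one headerless clone plus the original: 1 payload ref, 2 total refs */
	int dataref = pack_dataref(1, 2);
	int nohdr   = 1;	/* the skb being released is a headerless clone */

	/* the subtrahend used by skb_release_data() in the hunk above */
	int sub = nohdr ? (1 << SKB_DATAREF_SHIFT) + 1 : 1;

	dataref -= sub;		/* models atomic_sub_return() */

	printf("payload refs left: %d, total refs left: %d\n",
	       dataref >> SKB_DATAREF_SHIFT, dataref & SKB_DATAREF_MASK);
	return 0;
}
```

A non-zero result means another user still holds the data, which is exactly the
path that now jumps to the `exit` label and clears pp_recycle instead of returning.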