author	Magnus Karlsson <magnus.karlsson@intel.com>	2022-03-28 16:21:20 +0200
committer	Alexei Starovoitov <ast@kernel.org>	2022-03-28 19:56:28 -0700
commit	a95a4d9b39b0324402569ed7395aae59b8fd2b11 (patch)
tree	4b4d622f23f26e50fffadefb1382f9230ed108ad /net
parent	7df482e62282fb7839b033e332446f75b94e21c4 (diff)
xsk: Do not write NULL in SW ring at allocation failure
For the case when xp_alloc_batch() is used but the batched allocation
cannot be used, there is a slow path that uses the non-batched
xp_alloc(). When it fails to allocate an entry, it returns NULL. The
current code wrote this NULL into the entry of the provided results
array (usually a pointer into the driver's SW ring) and returned. This
might not be what the driver expects, so to make things simpler, only
write successfully allocated xdp_buffs into the SW ring. The driver
might have information in there that is still important after an
allocation failure.

Note that at this point in time, there are no drivers using
xp_alloc_batch() that could trigger this slow path. But one might get
added.

Fixes: 47e4075df300 ("xsk: Batched buffer allocation for the pool")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220328142123.170157-2-maciej.fijalkowski@intel.com
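To illustrate why leaving the SW ring entries intact matters, here is a
minimal, hypothetical driver refill sketch. As the commit message notes,
no in-tree driver currently exercises this slow path, so every name here
(my_rx_ring, xdp_ring, next_to_use, refill_budget, my_rx_refill) is
made up for illustration; only xp_alloc_batch() is real:

#include <net/xsk_buff_pool.h>

/* Hypothetical driver state; all names are illustrative. */
struct my_rx_ring {
	struct xsk_buff_pool *pool;
	struct xdp_buff **xdp_ring;	/* driver SW ring of xdp_buff pointers */
	u32 next_to_use;
	u32 refill_budget;
};

static u32 my_rx_refill(struct my_rx_ring *rx)
{
	u32 nb_buffs;

	/* xp_alloc_batch() writes at most refill_budget entries starting at
	 * &xdp_ring[next_to_use] and returns how many it allocated. After
	 * this patch, entries beyond that count are left untouched even on
	 * the dma_need_sync slow path, instead of being overwritten with
	 * NULL on failure.
	 */
	nb_buffs = xp_alloc_batch(rx->pool, &rx->xdp_ring[rx->next_to_use],
				  rx->refill_budget);
	rx->next_to_use += nb_buffs;
	return nb_buffs;
}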
Diffstat (limited to 'net')
-rw-r--r--	net/xdp/xsk_buff_pool.c	8
1 file changed, 6 insertions, 2 deletions
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index b34fca6ada86..af040ffa14ff 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -591,9 +591,13 @@ u32 xp_alloc_batch(struct xsk_buff_pool *pool, struct xdp_buff **xdp, u32 max)
 	u32 nb_entries1 = 0, nb_entries2;
 
 	if (unlikely(pool->dma_need_sync)) {
+		struct xdp_buff *buff;
+
 		/* Slow path */
-		*xdp = xp_alloc(pool);
-		return !!*xdp;
+		buff = xp_alloc(pool);
+		if (buff)
+			*xdp = buff;
+		return !!buff;
 	}
 
 	if (unlikely(pool->free_list_cnt)) {
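For readability, this is how the slow path reads after the patch; it is
simply the hunk above re-assembled, not new logic:

	if (unlikely(pool->dma_need_sync)) {
		struct xdp_buff *buff;

		/* Slow path */
		buff = xp_alloc(pool);
		if (buff)
			*xdp = buff;
		return !!buff;
	}

The key change is that *xdp is only stored through when xp_alloc()
succeeds, so a failed allocation can no longer clobber whatever the
driver left in that ring slot.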