author     Magnus Karlsson <magnus.karlsson@intel.com>       2019-02-08 14:13:50 +0100
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2020-01-27 14:50:21 +0100
commit     9aea648830b3709b32d898dc592b6aa5a3273379
tree       2e317a7ab1abf82d423b3186dafbe862e5bb3d69
parent     7bfcb0230e4a99aecdec1482edd88dec49f1c7ef
xsk: add missing smp_rmb() in xsk_mmap
[ Upstream commit e6762c8bcf982821935a2b1cb33cf8335d0eefae ]
All the setup code in AF_XDP is protected by a mutex, with the exception
of the mmap code, which cannot take it. To make sure that a process
hammering on the mmap call at the same time as another process is
setting up the socket sees a consistent view, smp_wmb() calls were added
in the umem registration code and the queue creation code, so that the
structures published for xsk_mmap would be observed fully initialized.
However, the corresponding smp_rmb() calls were never added to the
xsk_mmap code. This patch adds those calls.
Fixes: 37b076933a8e3 ("xsk: add missing write- and data-dependency barrier")
Fixes: c0c77d8fb787c ("xsk: add user memory registration support sockopt")
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
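The race described above is the classic publish/consume pattern: the setup path fully initializes a structure, issues a write barrier, and only then publishes the pointer, and the reader must pair that with a read barrier between loading the pointer and dereferencing what it points to. Below is a minimal userspace sketch of that pairing, using GCC/Clang atomic fences as stand-ins for the kernel's smp_wmb()/smp_rmb(); the names (struct ring, publish_ring, consume_ring) are invented for the example and are not taken from the kernel sources.

/*
 * Publish/consume sketch: the fences below play the roles of smp_wmb()
 * (setup side) and smp_rmb() (mmap side) in the patch.  Build with
 * "cc -pthread demo.c".
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct ring {
        int entries;
        int *data;
};

static struct ring *published_ring;     /* pointer the reader polls */

/* Setup path: initialize everything, then publish the pointer. */
static void publish_ring(int entries)
{
        struct ring *r = malloc(sizeof(*r));

        r->entries = entries;
        r->data = calloc(entries, sizeof(int));

        /* Kernel analogue: smp_wmb() -- order the stores above before the publish. */
        __atomic_thread_fence(__ATOMIC_RELEASE);
        __atomic_store_n(&published_ring, r, __ATOMIC_RELAXED);
}

/* Reader path (the mmap side): load the pointer, then order the reads after it. */
static void *consume_ring(void *arg)
{
        struct ring *r;

        while (!(r = __atomic_load_n(&published_ring, __ATOMIC_RELAXED)))
                ;       /* spin until the setup thread publishes */

        /* Kernel analogue: smp_rmb() -- without it, r->entries/r->data may be stale. */
        __atomic_thread_fence(__ATOMIC_ACQUIRE);
        printf("ring has %d entries\n", r->entries);
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, consume_ring, NULL);
        publish_ring(64);
        pthread_join(&t, NULL);
        return 0;
}

On x86 these fences amount to little more than compiler barriers, which is one reason a missing smp_rmb() can go unnoticed in testing; on weakly ordered architectures the reordering it prevents is real.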
net/xdp/xsk.c | 4 ++++
1 file changed, 4 insertions, 0 deletions
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index ff15207036dc..547fc4554b22 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -661,6 +661,8 @@ static int xsk_mmap(struct file *file, struct socket *sock,
                 if (!umem)
                         return -EINVAL;
 
+                /* Matches the smp_wmb() in XDP_UMEM_REG */
+                smp_rmb();
                 if (offset == XDP_UMEM_PGOFF_FILL_RING)
                         q = READ_ONCE(umem->fq);
                 else if (offset == XDP_UMEM_PGOFF_COMPLETION_RING)
@@ -670,6 +672,8 @@ static int xsk_mmap(struct file *file, struct socket *sock,
         if (!q)
                 return -EINVAL;
 
+        /* Matches the smp_wmb() in xsk_init_queue */
+        smp_rmb();
         qpg = virt_to_head_page(q->ring);
         if (size > (PAGE_SIZE << compound_order(qpg)))
                 return -EINVAL;
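For context on why the fix needs two separate smp_rmb() calls: the fill/completion ring is reached through two independently published pointers, the umem (published by the XDP_UMEM_REG setsockopt path) and the queue hanging off it (published by the queue creation path); each publish is ordered by its own smp_wmb(), so each dereference on the mmap side needs its own pairing smp_rmb(). Below is a hedged userspace sketch of the reader side, mirroring the shape of the patched xsk_mmap() but with invented stand-in types and macros (sock_stub, umem_stub, queue_stub, READ_ONCE_PTR) rather than the kernel's.

/*
 * Reader-side sketch only; the kernel's struct definitions and helpers
 * are replaced by minimal stand-ins, and smp_rmb()/READ_ONCE() are
 * mapped to acquire fences and relaxed atomic loads.
 */
#include <stddef.h>

#define smp_rmb()          __atomic_thread_fence(__ATOMIC_ACQUIRE)
#define READ_ONCE_PTR(p)   __atomic_load_n(&(p), __ATOMIC_RELAXED)

struct queue_stub { void *ring; };
struct umem_stub  { struct queue_stub *fq; };
struct sock_stub  { struct umem_stub *umem; };

/* Mirrors the control flow of the patched xsk_mmap() for the fill ring. */
static void *lookup_fill_ring(struct sock_stub *xs)
{
        struct umem_stub *umem = READ_ONCE_PTR(xs->umem);
        struct queue_stub *q;

        if (!umem)
                return NULL;
        smp_rmb();              /* pairs with the wmb at umem registration */

        q = READ_ONCE_PTR(umem->fq);
        if (!q)
                return NULL;
        smp_rmb();              /* pairs with the wmb at queue creation */

        return q->ring;
}

int main(void)
{
        static struct queue_stub q;
        static struct umem_stub umem = { .fq = &q };
        static struct sock_stub xs = { .umem = &umem };

        q.ring = &q;            /* any non-NULL value will do */
        return lookup_fill_ring(&xs) ? 0 : 1;
}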