author     Ilya Dryomov <idryomov@gmail.com>                2018-11-08 15:55:37 +0100
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2018-11-27 16:09:42 +0100
commit     c3ec4e5bda441079e8b5b02bd9e5edd544132123
tree       6ea0decf7d4e40ca49e3c796572cfcd808776558
parent     ab26f7fd578afad4eec4b4282fb6324f1940f50d
libceph: fall back to sendmsg for slab pages
commit 7e241f647dc7087a0401418a187f3f5b527cc690 upstream.
skb_can_coalesce() allows coalescing neighboring slab objects into
a single frag:
    return page == skb_frag_page(frag) &&
           off == frag->page_offset + skb_frag_size(frag);
ceph_tcp_sendpage() can be handed slab pages. One example of this is
XFS: it passes down sector-sized slab objects for its metadata I/O. If
the kernel client is co-located on the OSD node, the skb may go through
loopback and pop on the receive side with the exact same set of frags.
When tcp_recvmsg() attempts to copy out such a frag, hardened usercopy
complains because the size exceeds the object's allocated size:
usercopy: kernel memory exposure attempt detected from ffff9ba917f20a00 (kmalloc-512) (1024 bytes)
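To make the failure mode concrete, here is a small userspace model of the
coalescing check. struct frag, can_coalesce() and the 512-byte object size
are illustrative stand-ins chosen to match the kmalloc-512 report above;
this is not kernel code:

    #include <stdbool.h>
    #include <stdio.h>

    /* stand-in for skb_frag_t: which page the frag points at, and where */
    struct frag {
            const char *page;
            unsigned int page_offset;
            unsigned int size;
    };

    /* same shape as the skb_can_coalesce() condition quoted above */
    static bool can_coalesce(const struct frag *frag, const char *page,
                             unsigned int off)
    {
            return page == frag->page &&
                   off == frag->page_offset + frag->size;
    }

    int main(void)
    {
            /* one page backing several neighboring 512-byte slab objects */
            static char slab_page[4096];
            /* the first sector-sized object is already queued as a frag */
            struct frag frag = { slab_page, 0, 512 };

            /* the next object starts exactly where the frag ends ... */
            if (can_coalesce(&frag, slab_page, 512))
                    frag.size += 512;   /* ... so the frag grows to 1024 bytes */

            /*
             * On the receive side the whole 1024-byte frag is copied out of
             * what hardened usercopy knows to be a 512-byte allocation,
             * hence "(kmalloc-512) (1024 bytes)" in the message above.
             */
            printf("frag now spans %u bytes\n", frag.size);
            return 0;
    }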
Although skb_can_coalesce() could be taught to return false if the
resulting frag would cross a slab object boundary, we already have
a fallback for non-refcounted pages. Utilize it for slab pages too.
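For reference, the fallback being reused is the kmap() + sendmsg path at the
bottom of ceph_tcp_sendpage(). A rough sketch of how that tail reads with the
patch applied; only the kmap() line and the new condition are visible in the
hunk below, the rest is an approximation of the surrounding messenger.c code:

    /* slab pages no longer take the zero-copy sendpage path ... */
    if (page_count(page) >= 1 && !PageSlab(page))
            return __ceph_tcp_sendpage(sock, page, offset, size, more);

    /* ... and instead get copied through the kvec/sendmsg fallback */
    iov.iov_base = kmap(page) + offset;
    iov.iov_len = size;
    ret = ceph_tcp_sendmsg(sock, &iov, 1, size, more);
    kunmap(page);

    return ret;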
Cc: stable@vger.kernel.org # 4.8+
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r--  net/ceph/messenger.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 98ea28dc03f9..68acf94fae72 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -588,9 +588,15 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
 	int ret;
 	struct kvec iov;
 
-	/* sendpage cannot properly handle pages with page_count == 0,
-	 * we need to fallback to sendmsg if that's the case */
-	if (page_count(page) >= 1)
+	/*
+	 * sendpage cannot properly handle pages with page_count == 0,
+	 * we need to fall back to sendmsg if that's the case.
+	 *
+	 * Same goes for slab pages: skb_can_coalesce() allows
+	 * coalescing neighboring slab objects into a single frag which
+	 * triggers one of hardened usercopy checks.
+	 */
+	if (page_count(page) >= 1 && !PageSlab(page))
 		return __ceph_tcp_sendpage(sock, page, offset, size, more);
 
 	iov.iov_base = kmap(page) + offset;