author     Qu Wenruo <wqu@suse.com>            2018-05-15 14:57:51 +0800
committer  David Sterba <dsterba@suse.com>     2018-05-30 16:46:38 +0200
commit     314bfa473b6b6d3efe68011899bd718b349f29d7
tree       92de19197d185ad33f1dd98495b6053d4eded1a7
parent     2a1f7c0cbddae68a34096d056fc7ff7b171bab17
btrfs: lzo: Add header length check to avoid potential out-of-bounds access
James Harvey reported that corrupted compressed extent data can lead to
various kinds of kernel memory corruption.
Such corrupted extent data belongs to an inode with the NODATASUM flag,
so the data csum won't help us detect the corruption.
With enough luck, KASAN can catch it:
BUG: KASAN: slab-out-of-bounds in lzo_decompress_bio+0x384/0x7a0 [btrfs]
Write of size 4096 at addr ffff8800606cb0f8 by task kworker/u16:0/2338
CPU: 3 PID: 2338 Comm: kworker/u16:0 Tainted: G O 4.17.0-rc5-custom+ #50
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Workqueue: btrfs-endio btrfs_endio_helper [btrfs]
Call Trace:
dump_stack+0xc2/0x16b
print_address_description+0x6a/0x270
kasan_report+0x260/0x380
memcpy+0x34/0x50
lzo_decompress_bio+0x384/0x7a0 [btrfs]
end_compressed_bio_read+0x99f/0x10b0 [btrfs]
bio_endio+0x32e/0x640
normal_work_helper+0x15a/0xea0 [btrfs]
process_one_work+0x7e3/0x1470
worker_thread+0x1b0/0x1170
kthread+0x2db/0x390
ret_from_fork+0x22/0x40
...
The offending compressed data has the following info:
Header: length 32768 (looks completely valid)
Segment 0 Header: length 3472882419 (obviously out of bounds)
Then, when handling segment 0, since it extends beyond the current page, we
need to copy the compressed data into a temporary buffer in the workspace.
Such a huge length triggers an out-of-bounds memory access, corrupting
kernel memory.
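To illustrate the failure mode, here is a minimal user-space sketch of the
on-disk layout the decompressor walks: a 4-byte little-endian total length
header, then per-segment 4-byte length headers followed by segment data.
The names read_len, sketch_decompress and segment_buf are made up for this
illustration, MAX_SEGMENT_LEN mirrors the kernel's
lzo1x_worst_compress(PAGE_SIZE), and the real code copies segment data page
by page from bio pages rather than from one flat buffer:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define LZO_LEN          4       /* size of each length header */
#define PAGE_SIZE        4096
#define MAX_SEGMENT_LEN  (PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3) /* lzo1x_worst_compress(PAGE_SIZE) */

/* Read a 4-byte little-endian length header (assumes a little-endian host). */
static uint32_t read_len(const uint8_t *p)
{
	uint32_t v;

	memcpy(&v, p, LZO_LEN);
	return v;
}

/*
 * Sketch of the pre-patch parse loop: every segment length read from disk
 * is trusted and used directly as a copy size. With the reported extent
 * (total length 32768, segment 0 length 3472882419) the memcpy() runs far
 * past the end of segment_buf, which is only MAX_SEGMENT_LEN bytes.
 */
static void sketch_decompress(const uint8_t *extent, size_t srclen)
{
	uint8_t segment_buf[MAX_SEGMENT_LEN];
	uint32_t tot_len = read_len(extent);              /* header: 32768, looks valid */
	size_t off = LZO_LEN;

	while (off + LZO_LEN <= tot_len && off + LZO_LEN <= srclen) {
		uint32_t in_len = read_len(extent + off); /* segment 0: 3472882419 */

		off += LZO_LEN;
		/* No bound check on in_len: out-of-bounds write and read here. */
		memcpy(segment_buf, extent + off, in_len);
		/* ...decompression of segment_buf would follow... */
		off += in_len;
	}
}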
Fix it by adding extra checks on the header and the segment headers to
ensure we never access out of bounds, and also check that the decompressed
data will not run out of bounds.
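Restated outside the kernel, the two added checks look roughly like the
sketch below. check_header and check_segment are hypothetical names,
BTRFS_MAX_COMPRESSED is the 128K compressed extent limit from
fs/btrfs/compression.h, and PAGE_SIZE/MAX_SEGMENT_LEN are the same
illustrative constants as in the previous sketch. Plugging in the reported
values: the header length 32768 passes, but segment 0's length 3472882419
is far larger than MAX_SEGMENT_LEN (4419 for 4K pages), so decompression
now bails out with -EUCLEAN instead of overrunning the buffer:

#define BTRFS_MAX_COMPRESSED  (128 * 1024)   /* SZ_128K in the kernel */

/*
 * Header check: the claimed total compressed length must fit both the
 * extent on disk and the 128K limit, and must use every page except
 * possibly part of the last one, i.e. it can be at most one page smaller
 * than srclen.
 */
static int check_header(size_t srclen, size_t tot_len)
{
	size_t cap = srclen < BTRFS_MAX_COMPRESSED ? srclen : BTRFS_MAX_COMPRESSED;

	if (tot_len > cap || tot_len < srclen - PAGE_SIZE)
		return -1;                   /* the kernel returns -EUCLEAN */
	return 0;
}

/*
 * Segment check: one segment can't exceed the LZO worst case for a page,
 * nor run past the total compressed length claimed by the header.
 */
static int check_segment(size_t tot_len, size_t tot_in, size_t in_len)
{
	if (in_len > MAX_SEGMENT_LEN || tot_in + in_len > tot_len)
		return -1;
	return 0;
}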
Reported-by: James Harvey <jamespharvey20@gmail.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ updated comments ]
Signed-off-by: David Sterba <dsterba@suse.com>
 fs/btrfs/lzo.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index 9591927022ba..464fbb96af1a 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -295,6 +295,7 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 	unsigned long working_bytes;
 	size_t in_len;
 	size_t out_len;
+	const size_t max_segment_len = lzo1x_worst_compress(PAGE_SIZE);
 	unsigned long in_offset;
 	unsigned long in_page_bytes_left;
 	unsigned long tot_in;
@@ -308,10 +309,22 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 
 	data_in = kmap(pages_in[0]);
 	tot_len = read_compress_length(data_in);
+	/*
+	 * Compressed data header check.
+	 *
+	 * The real compressed size can't exceed the maximum extent length, and
+	 * all pages should be used (whole unused page with just the segment
+	 * header is not possible). If this happens it means the compressed
+	 * extent is corrupted.
+	 */
+	if (tot_len > min_t(size_t, BTRFS_MAX_COMPRESSED, srclen) ||
+	    tot_len < srclen - PAGE_SIZE) {
+		ret = -EUCLEAN;
+		goto done;
+	}
 
 	tot_in = LZO_LEN;
 	in_offset = LZO_LEN;
-	tot_len = min_t(size_t, srclen, tot_len);
 	in_page_bytes_left = PAGE_SIZE - LZO_LEN;
 
 	tot_out = 0;
@@ -322,6 +335,17 @@ static int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 		in_offset += LZO_LEN;
 		tot_in += LZO_LEN;
 
+		/*
+		 * Segment header check.
+		 *
+		 * The segment length must not exceed the maximum LZO
+		 * compression size, nor the total compressed size.
+		 */
+		if (in_len > max_segment_len || tot_in + in_len > tot_len) {
+			ret = -EUCLEAN;
+			goto done;
+		}
+
 		tot_in += in_len;
 		working_bytes = in_len;
 		may_late_unmap = need_unmap = false;
@@ -372,7 +396,7 @@ cont:
 			}
 		}
 
-		out_len = lzo1x_worst_compress(PAGE_SIZE);
+		out_len = max_segment_len;
 		ret = lzo1x_decompress_safe(buf, in_len, workspace->buf,
 					&out_len);
 		if (need_unmap)