author | Sultan Alsawaf <sultan@kerneltoast.com> | 2022-05-13 15:11:26 -0700
---|---|---
committer | Andrew Morton <akpm@linux-foundation.org> | 2022-05-13 15:11:26 -0700
commit | 2505a981114dcb715f8977b8433f7540854851d8 (patch) |
tree | 4913158c36d00dadabb3211577f56f4d3c06ef50 /virt |
parent | 60a60e32cf91169840abcb4a80f0b0df31708ba7 (diff) |
download | linux-stable-2505a981114dcb715f8977b8433f7540854851d8.tar.gz linux-stable-2505a981114dcb715f8977b8433f7540854851d8.tar.bz2 linux-stable-2505a981114dcb715f8977b8433f7540854851d8.zip |
zsmalloc: fix races between asynchronous zspage free and page migration
The asynchronous zspage free worker tries to lock a zspage's entire page
list without defending against page migration. Since pages which haven't
yet been locked can concurrently migrate off the zspage page list while
lock_zspage() churns away, lock_zspage() can suffer from a few different
lethal races.
Specifically, lock_zspage() can:
 - lock a page which no longer belongs to the zspage and unsafely dereference page_private(),
 - unsafely dereference a torn pointer to the next page (since there's a data race), and
 - observe a spurious NULL pointer to the next page and thus fail to lock all of the zspage's pages (a single page migration reconstructs the entire page list, and create_page_chain() unconditionally zeroes out each list pointer in the process).
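For reference, the pre-fix traversal was essentially the loop below: a paraphrased sketch of the old mm/zsmalloc.c code relying on zsmalloc's internal helpers (get_first_page(), get_next_page()), not a verbatim copy. Every race above stems from walking the list with no protection against concurrent migration:

```c
/* Sketch of lock_zspage() before this fix (paraphrased). */
static void lock_zspage(struct zspage *zspage)
{
	struct page *page = get_first_page(zspage);

	do {
		/*
		 * Nothing stops a not-yet-locked page from migrating off
		 * the zspage while this loop runs, so both the
		 * page_private() lookup inside get_next_page() and the
		 * next-page pointer it returns can be stale, torn, or
		 * spuriously NULL.
		 */
		lock_page(page);
	} while ((page = get_next_page(page)) != NULL);
}
```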
Fix the races by using migrate_read_lock() in lock_zspage() to synchronize
with page migration.
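The repaired loop only picks a page and trylocks it while holding migrate_read_lock(); when the trylock fails, it pins the page, drops the migrate lock, and waits for the page to unlock before retrying. The following is a sketch of that approach (close to, but not guaranteed identical to, the applied patch), again using zsmalloc's internal helpers:

```c
/* Sketch of lock_zspage() synchronized against migration. */
static void lock_zspage(struct zspage *zspage)
{
	struct page *curr_page, *page;

	/*
	 * Lock the first page. If the trylock fails, take a reference to
	 * the page so it is safe to wait on outside migrate_read_lock(),
	 * then retry from the (possibly new) first page.
	 */
	while (1) {
		migrate_read_lock(zspage);
		page = get_first_page(zspage);
		if (trylock_page(page))
			break;
		get_page(page);
		migrate_read_unlock(zspage);
		wait_on_page_locked(page);
		put_page(page);
	}

	/* Walk and lock the remaining pages under the same discipline. */
	curr_page = page;
	while ((page = get_next_page(curr_page))) {
		if (trylock_page(page)) {
			curr_page = page;
		} else {
			get_page(page);
			migrate_read_unlock(zspage);
			wait_on_page_locked(page);
			put_page(page);
			migrate_read_lock(zspage);
		}
	}
	migrate_read_unlock(zspage);
}
```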
Link: https://lkml.kernel.org/r/20220509024703.243847-1-sultan@kerneltoast.com
Fixes: 77ff465799c602 ("zsmalloc: zs_page_migrate: skip unnecessary loops but not return -EBUSY if zspage is not inuse")
Signed-off-by: Sultan Alsawaf <sultan@kerneltoast.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>