author	Oleg Nesterov <oleg@redhat.com>	2019-07-04 15:14:49 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2019-07-10 09:52:29 +0200
commit	d82bf47aa320a0df39df27e63e1e319c429b572e (patch)
tree	6c5cac911efecdbadf0c13b2217b01152c39cdb1 /mm
parent	61ff807f0dd9e1daa3aa75ade32c102dedbcffd9 (diff)
swap_readpage(): avoid blk_wake_io_task() if !synchronous
commit 8751853091998cd31e9e5f1e8206280155af8921 upstream.

swap_readpage() sets waiter = bio->bi_private even if synchronous = F, which means the caller can get a spurious wakeup after return.

This can be fatal if blk_wake_io_task() does set_current_state(TASK_RUNNING) after the caller has done set_special_state(); in the worst case the kernel can crash in do_task_dead().

Link: http://lkml.kernel.org/r/20190704160301.GA5956@redhat.com
Fixes: 0619317ff8baa2d ("block: add polled wakeup task helper")
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Qian Cai <cai@lca.pw>
Acked-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Jens Axboe <axboe@kernel.dk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
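To make the race concrete, here is a simplified sketch of the two code paths involved, paraphrased from mm/page_io.c and kernel/exit.c of that era; the helper bodies are abbreviated and this is not the literal kernel source:

	/* Completion path, before this patch: bi_private was set even for
	 * asynchronous reads, so the wakeup below could hit a task that had
	 * long since returned from swap_readpage(). */
	static void end_swap_bio_read(struct bio *bio)
	{
		struct task_struct *waiter = bio->bi_private;
		/* ... error handling, unlock_page(), bio_put() ... */
		blk_wake_io_task(waiter);	/* __set_current_state(TASK_RUNNING)
						 * if waiter == current, else
						 * wake_up_process(waiter) */
		put_task_struct(waiter);
	}

	/* Much later, the same task starts exiting: */
	void do_task_dead(void)
	{
		set_special_state(TASK_DEAD);	/* must not be overwritten */
		__schedule(false);
		BUG();		/* reached if a stray wakeup runs the task again */
	}

If the stray wakeup from blk_wake_io_task() lands between set_special_state(TASK_DEAD) and the final __schedule(), the dying task can be made runnable again and hits the BUG() in do_task_dead().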
Diffstat (limited to 'mm')
-rw-r--r--	mm/page_io.c	13
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/page_io.c b/mm/page_io.c
index 189415852077..a39aac2f8c8d 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -137,8 +137,10 @@ out:
 	unlock_page(page);
 	WRITE_ONCE(bio->bi_private, NULL);
 	bio_put(bio);
-	blk_wake_io_task(waiter);
-	put_task_struct(waiter);
+	if (waiter) {
+		blk_wake_io_task(waiter);
+		put_task_struct(waiter);
+	}
 }
 
 int generic_swapfile_activate(struct swap_info_struct *sis,
@@ -395,11 +397,12 @@ int swap_readpage(struct page *page, bool synchronous)
 	 * Keep this task valid during swap readpage because the oom killer may
 	 * attempt to access it in the page fault retry time check.
 	 */
-	get_task_struct(current);
-	bio->bi_private = current;
 	bio_set_op_attrs(bio, REQ_OP_READ, 0);
-	if (synchronous)
+	if (synchronous) {
 		bio->bi_opf |= REQ_HIPRI;
+		get_task_struct(current);
+		bio->bi_private = current;
+	}
 	count_vm_event(PSWPIN);
 	bio_get(bio);
 	qc = submit_bio(bio);
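The waiter is still needed in the synchronous case, because swap_readpage() polls for the completion after submit_bio(). A rough sketch of that polling loop (paraphrased from the same function; exact details vary between stable kernels) shows that bio->bi_private is only consumed on this path, which is why it can safely be left NULL for asynchronous reads:

	while (synchronous) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (!READ_ONCE(bio->bi_private))
			break;		/* end_swap_bio_read() already ran */
		if (!blk_poll(disk->queue, qc, true))
			io_schedule();
	}
	__set_current_state(TASK_RUNNING);

With the patch applied, end_swap_bio_read() sees a NULL waiter for asynchronous reads and skips both blk_wake_io_task() and put_task_struct(), so the caller can no longer receive a wakeup it never asked for.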