author     Rafael J. Wysocki <rjw@sisk.pl>   2011-07-06 20:15:23 +0200
committer  Rafael J. Wysocki <rjw@sisk.pl>   2011-07-06 20:15:23 +0200
commit     4d4cf23cdde2f8f9324f5684a7f349e182039529 (patch)
tree       f84a26d15f1112cc7c452d1fdac3bd70857774e0 /kernel
parent     a2fa83faf47b514ab947cea916d3691b66525073 (diff)
PM / Hibernate: Fix free_unnecessary_pages()
There is a bug in free_unnecessary_pages() that causes it to attempt to
free too many pages in some cases, which triggers the BUG_ON() in
memory_bm_clear_bit() for copy_bm.

Namely, if count_data_pages() is initially greater than alloc_normal,
we get to_free_normal equal to 0 and "save" greater than 0.  In that
case, if the sum of "save" and count_highmem_pages() is greater than
alloc_highmem, we subtract a positive number from to_free_normal.
Since to_free_normal was 0 before the subtraction and is unsigned, the
result wraps around to a huge positive number that is then used as the
number of pages to free.

Fix this bug by checking whether to_free_normal is actually greater
than or equal to the number we are about to subtract from it.

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reported-and-tested-by: Matthew Garrett <mjg@redhat.com>
Cc: stable@kernel.org
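The wraparound is easy to reproduce in isolation. The following standalone
sketch is not kernel code; the variable values are made up to satisfy the
conditions described above. It shows how the unchecked subtraction turns a
to_free_normal of 0 into a huge unsigned value, and how the guard added by
the patch clamps it at zero instead:

/*
 * Standalone illustration of the underflow (assumed values, not taken
 * from a real hibernation run).
 */
#include <stdio.h>

int main(void)
{
	unsigned long to_free_normal = 0;	/* count_data_pages() > alloc_normal */
	unsigned long save = 100;		/* pages that still must be kept */
	unsigned long alloc_highmem = 40;	/* save + count_highmem_pages() > alloc_highmem */

	/* Buggy pattern: 0 - 60 wraps around to a huge page count. */
	unsigned long buggy = to_free_normal - (save - alloc_highmem);
	printf("buggy: %lu pages to free\n", buggy);

	/* Fixed pattern: guard the subtraction, as the patch below does. */
	save -= alloc_highmem;
	unsigned long fixed = (to_free_normal > save) ? to_free_normal - save : 0;
	printf("fixed: %lu pages to free\n", fixed);

	return 0;
}

On a 64-bit machine the buggy value prints as 18446744073709551556, which
is exactly the kind of bogus "number of pages to free" that ends up
tripping the BUG_ON() in memory_bm_clear_bit().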
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/power/snapshot.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index ace55889f702..06efa54f93d6 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1211,7 +1211,11 @@ static void free_unnecessary_pages(void)
 		to_free_highmem = alloc_highmem - save;
 	} else {
 		to_free_highmem = 0;
-		to_free_normal -= save - alloc_highmem;
+		save -= alloc_highmem;
+		if (to_free_normal > save)
+			to_free_normal -= save;
+		else
+			to_free_normal = 0;
 	}
 
 	memory_bm_position_reset(&copy_bm);