| author | Mel Gorman <mgorman@suse.de> | 2011-10-17 16:38:20 +0200 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@suse.de> | 2011-10-18 14:01:24 -0700 |
| commit | 2bbcb8788311a40714b585fc11b51da6ffa2ab92 | |
| tree | 05981b0261c3fe7d8ebe6e64a66ed71954b19997 /lib | |
| parent | de0ed36a3ecc0b51da4f16fa0af47ba6b7ffad22 | |
mm: memory hotplug: Check if pages are correctly reserved on a per-section basis
(Resending as I am not seeing it in -next so maybe it got lost)
Memory being brought online is expected to be PageReserved, similar
to what happens when the page allocator is brought up.
Memory is onlined in "memory blocks", which consist of one or more
sections. Unfortunately, the code that verifies PageReserved
currently assumes that the memmap backing all these pages is virtually
contiguous, which is only the case when CONFIG_SPARSEMEM_VMEMMAP is set.
As a result, memory hot-add fails on those configurations with
the message:
kernel: section number XXX page number 256 not reserved, was it already online?
This patch updates the PageReserved check to look up struct page once
per section, guaranteeing that the correct struct page is checked.
[Check pages within sections properly: rientjes@google.com]
[original patch by: nfont@linux.vnet.ibm.com]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Tested-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>