author     Michal Hocko <mhocko@suse.com>                   2017-07-06 15:37:45 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2017-07-06 16:24:32 -0700
commit     1b862aecfbd419cdc4553645bf86d07554279bed (patch)
tree       c219675bc522f504b83ffaa522eb2ed4292961b9 /mm/memory_hotplug.c
parent     bfe63d3beabfac93521c8b7ccd40befd7a90148e (diff)
mm, memory_hotplug: get rid of is_zone_device_section
Device memory hotplug hooks into regular memory hotplug only halfway.
It needs memory sections to track struct pages, but there is no
need/desire to associate those sections with memory blocks and export
them to userspace via sysfs, because they cannot be onlined anyway.
This is currently expressed by the for_device argument to
arch_add_memory, which then makes sure to associate the given memory
range with ZONE_DEVICE. register_new_memory then relies on
is_zone_device_section to distinguish special memory hotplug from the
regular one. While this works now, later patches in this series want
to move __add_zone outside of the arch_add_memory path, so we have to
come up with something else.
Add a want_memblock argument down the __add_pages path and use it to
control whether the section->memblock association should be done.
arch_add_memory then trivially wants a memblock for everything except
for_device hotplug, as sketched below.
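
For illustration, an arch_add_memory implementation then reduces to
something like the following sketch. This is loosely based on the x86
shape at the time; per-arch details (zone selection, page table setup)
differ, so treat it as a sketch rather than the exact code:

/*
 * Sketch only, loosely modeled on the x86 variant of this series.
 * Device memory gets struct pages but no memory block devices, so
 * want_memblock is simply the negation of for_device.
 */
int arch_add_memory(int nid, u64 start, u64 size, bool for_device)
{
	struct pglist_data *pgdat = NODE_DATA(nid);
	struct zone *zone = pgdat->node_zones +
		zone_for_memory(nid, start, size, ZONE_NORMAL, for_device);
	unsigned long start_pfn = start >> PAGE_SHIFT;
	unsigned long nr_pages = size >> PAGE_SHIFT;

	/* want a memblock for everything but for_device hotplug */
	return __add_pages(nid, zone, start_pfn, nr_pages, !for_device);
}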
remove_memory_section doesn't need is_zone_device_section either. We
can simply skip all the memblock-specific cleanup if there is no
memblock for the given section.
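
Roughly, the remove_memory_section side in drivers/base/memory.c then
bails out early when find_memory_block finds nothing for the section.
A sketch of that shape (locking and helper names as they existed at
the time; not the verbatim diff):

static int remove_memory_section(unsigned long node_id,
			struct mem_section *section, int phys_device)
{
	struct memory_block *mem;

	mutex_lock(&mem_sysfs_mutex);

	/*
	 * Some users of memory hotplug do not want/need a memblock to
	 * track their sections. Skip the cleanup for those entirely.
	 */
	mem = find_memory_block(section);
	if (!mem)
		goto out_unlock;

	unregister_mem_sect_under_nodes(mem, __section_nr(section));

	mem->section_count--;
	if (mem->section_count == 0)
		unregister_memory(mem);
	else
		put_device(&mem->dev);

out_unlock:
	mutex_unlock(&mem_sysfs_mutex);
	return 0;
}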
This shouldn't introduce any functional change.
Link: http://lkml.kernel.org/r/20170515085827.16474-5-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Tested-by: Dan Williams <dan.j.williams@intel.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: Tobias Regnery <tobias.regnery@gmail.com>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/memory_hotplug.c')
-rw-r--r--  mm/memory_hotplug.c | 9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 6b6362819be2..c0147d3024eb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -494,7 +494,7 @@ static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn
 }
 
 static int __meminit __add_section(int nid, struct zone *zone,
-		unsigned long phys_start_pfn)
+		unsigned long phys_start_pfn, bool want_memblock)
 {
 	int ret;
 
@@ -511,6 +511,9 @@ static int __meminit __add_section(int nid, struct zone *zone,
 	if (ret < 0)
 		return ret;
 
+	if (!want_memblock)
+		return 0;
+
 	return register_new_memory(nid, __pfn_to_section(phys_start_pfn));
 }
 
@@ -521,7 +524,7 @@ static int __meminit __add_section(int nid, struct zone *zone,
  * add the new pages.
  */
 int __ref __add_pages(int nid, struct zone *zone, unsigned long phys_start_pfn,
-			unsigned long nr_pages)
+			unsigned long nr_pages, bool want_memblock)
 {
 	unsigned long i;
 	int err = 0;
@@ -549,7 +552,7 @@ int __ref __add_pages(int nid, struct zone *zone, unsigned long phys_start_pfn,
 	}
 
 	for (i = start_sec; i <= end_sec; i++) {
-		err = __add_section(nid, zone, section_nr_to_pfn(i));
+		err = __add_section(nid, zone, section_nr_to_pfn(i), want_memblock);
 
 		/*
 		 * EEXIST is finally dealt with by ioresource collision