author     Martin Schwidefsky <schwidefsky@de.ibm.com>  2018-07-31 16:14:18 +0200
committer  Martin Schwidefsky <schwidefsky@de.ibm.com>  2018-08-01 07:48:33 +0200
commit     fb7d7518b0d65955f91c7b875c36eae7694c69bd
tree       dedff29ea5e0555417a8d786686eeed70d301b59 /arch/s390
parent     8cce437fbb5c1f2af2f63834fa05082596beca5d
s390/numa: move initial setup of node_to_cpumask_map
The numa_init_early initcall sets the node_to_cpumask_map[0] to the
full cpu_possible_mask. Unfortunately, this early_initcall is too late:
the NUMA setup for numa=emu is done even earlier. The order of calls
is numa_setup() -> emu_update_cpu_topology(), then the early_initcalls(),
followed by sched_init_domains().
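
To make the ordering concrete, here is a small userspace simulation of the
bug; the two-node layout, the plain 64-bit bitmaps, and the function bodies
are simplifications invented for illustration, not kernel code:

/*
 * Toy model of the ordering bug: node masks are plain 64-bit CPU
 * bitmaps instead of struct cpumask, and the boot sequence is
 * collapsed into two calls. Builds with any C compiler.
 */
#include <stdio.h>
#include <stdint.h>

#define NR_CPUS 8

static uint64_t node_to_cpumask_map[2];
static const uint64_t cpu_possible_mask = (1ULL << NR_CPUS) - 1;

/* Stand-in for emu_update_cpu_topology(): spread CPUs over two nodes. */
static void emu_setup(void)
{
	node_to_cpumask_map[0] = 0x0f;	/* CPUs 0-3 */
	node_to_cpumask_map[1] = 0xf0;	/* CPUs 4-7 */
}

/* Stand-in for the numa_init_early() early_initcall. */
static void numa_init_early(void)
{
	node_to_cpumask_map[0] = cpu_possible_mask;
}

int main(void)
{
	emu_setup();		/* numa_setup() runs first for numa=emu */
	numa_init_early();	/* the early_initcall then clobbers node 0 */

	/* Node 0 now overlaps node 1, which breaks the domain setup. */
	printf("node 0: %#llx, node 1: %#llx, overlap: %#llx\n",
	       (unsigned long long)node_to_cpumask_map[0],
	       (unsigned long long)node_to_cpumask_map[1],
	       (unsigned long long)(node_to_cpumask_map[0] &
				    node_to_cpumask_map[1]));
	return 0;
}

Swapping the two calls in main(), which is the ordering the patch below
establishes, leaves node 0 with only CPUs 0-3 and no overlap.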
Starting with git commit 051f3ca02e46432c0965e8948f00c07d8a2f09c0
"sched/topology: Introduce NUMA identity node sched domain"
the incorrect node_to_cpumask_map[0] really screws up the domain
setup and the kernel panics with the following oops:
Cc: <stable@vger.kernel.org> # v4.15+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
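
For context: the scheduler consumes this array through the architecture's
cpumask_of_node() helper, which on s390 returns the per-node mask directly.
The sketch below paraphrases that helper and relies on kernel definitions
(struct cpumask, node_to_cpumask_map); treat the exact form as an
assumption rather than a verbatim quote:

/*
 * Sketch of the s390 cpumask_of_node() helper (paraphrased; relies on
 * the kernel's struct cpumask and node_to_cpumask_map definitions).
 */
static inline const struct cpumask *cpumask_of_node(int node)
{
	return &node_to_cpumask_map[node];
}

With node_to_cpumask_map[0] still covering every possible CPU, the identity
domain built for node 0 overlaps the domains of all other emulated nodes,
which the domain setup cannot cope with.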
Diffstat (limited to 'arch/s390')
-rw-r--r--  arch/s390/numa/numa.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
index 06a80434cfe6..5bd374491f94 100644
--- a/arch/s390/numa/numa.c
+++ b/arch/s390/numa/numa.c
@@ -134,6 +134,8 @@ void __init numa_setup(void)
 {
 	pr_info("NUMA mode: %s\n", mode->name);
 	nodes_clear(node_possible_map);
+	/* Initially attach all possible CPUs to node 0. */
+	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
 	if (mode->setup)
 		mode->setup();
 	numa_setup_memory();
@@ -141,20 +143,6 @@ void __init numa_setup(void)
 }
 
 /*
- * numa_init_early() - Initialization initcall
- *
- * This runs when only one CPU is online and before the first
- * topology update is called for by the scheduler.
- */
-static int __init numa_init_early(void)
-{
-	/* Attach all possible CPUs to node 0 for now. */
-	cpumask_copy(&node_to_cpumask_map[0], cpu_possible_mask);
-	return 0;
-}
-early_initcall(numa_init_early);
-
-/*
  * numa_init_late() - Initialization initcall
  *
  * Register NUMA nodes.
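
The net effect: the default node 0 mask is now established inside
numa_setup() itself, immediately before mode->setup(), so a mode such as
numa=emu that distributes CPUs over emulated nodes overwrites the all-CPUs
default instead of being overwritten by a later initcall, and the
now-redundant numa_init_early() early_initcall is removed.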