author     Thomas Gleixner <tglx@linutronix.de>    2017-12-23 19:45:11 +0100
committer  Thomas Gleixner <tglx@linutronix.de>    2017-12-23 20:18:42 +0100
commit     f6c4fd506cb626e4346aa81688f255e593a7c5a0 (patch)
tree       51356ab92d31c42e817b02ded05fe2dad0d17a81 /arch
parent     613e396bc0d4c7604fba23256644e78454c68cf6 (diff)
x86/cpu_entry_area: Prevent wraparound in setup_cpu_entry_area_ptes() on 32bit
The loop which populates the CPU entry area PMDs can wrap around on 32bit
machines when the number of CPUs is small.
It worked wonderfully for NR_CPUS=64 for whatever reason, and the moron who
wrote that code did not bother to test it with !SMP.
Check for the wraparound to fix it.
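For illustration, here is a minimal standalone userspace sketch of the failure mode; this is not kernel code, and DEMO_BASE, DEMO_MAP_SIZE and DEMO_PMD_SIZE are made-up stand-ins for CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_MAP_SIZE and PMD_SIZE, chosen so the mapped area sits in the topmost PMD of a 32-bit address space.

#include <stdio.h>
#include <inttypes.h>

#define DEMO_BASE      UINT32_C(0xffc00000)	/* made-up base in the last PMD */
#define DEMO_MAP_SIZE  UINT32_C(0x00100000)	/* made-up small map (few CPUs) */
#define DEMO_PMD_SIZE  UINT32_C(0x00400000)	/* 4 MiB PMD, 32-bit non-PAE */

int main(void)
{
	uint32_t start, end = DEMO_BASE + DEMO_MAP_SIZE;
	unsigned int n = 0;

	/* Guarded loop, mirroring the fix: stop once start wraps below the base. */
	for (start = DEMO_BASE; start < end && start >= DEMO_BASE; start += DEMO_PMD_SIZE)
		n++;
	printf("guarded loop covers %u PMD(s)\n", n);

	/* Why "start < end" alone is not enough: the increment wraps past
	 * 4 GiB to a small address that still compares below end, so an
	 * unguarded loop keeps populating far outside the intended area. */
	start = DEMO_BASE + DEMO_PMD_SIZE;	/* wraps to 0 in 32-bit math */
	printf("after wrap: start=0x%08" PRIx32 ", start < end: %s\n",
	       start, start < end ? "true (bogus)" : "false");
	return 0;
}

With these numbers the guarded loop stops after a single PMD, while the bare "start < end" test would keep iterating over wrapped-around addresses.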
Fixes: 92a0f81d8957 ("x86/cpu_entry_area: Move it out of the fixmap")
Reported-by: kernel test robot <fengguang.wu@intel.com>
Signed-off-by: Thomas "Feels stupid" Gleixner <tglx@linutronix.de>
Tested-by: Borislav Petkov <bp@alien8.de>
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/mm/cpu_entry_area.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index 21e8b595cbb1..fe814fd5e014 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -122,7 +122,8 @@ static __init void setup_cpu_entry_area_ptes(void)
 	start = CPU_ENTRY_AREA_BASE;
 	end = start + CPU_ENTRY_AREA_MAP_SIZE;
 
-	for (; start < end; start += PMD_SIZE)
+	/* Careful here: start + PMD_SIZE might wrap around */
+	for (; start < end && start >= CPU_ENTRY_AREA_BASE; start += PMD_SIZE)
 		populate_extra_pte(start);
 #endif
 }