author    | Ard Biesheuvel <ardb@kernel.org>          | 2020-03-29 16:12:58 +0200
committer | Catalin Marinas <catalin.marinas@arm.com> | 2020-04-01 21:44:43 +0100
commit    | e16e65a02913d29a7b27c4e3a415ceec967b0629 (patch)
tree      | 253630d5e431cd1706a14394138c511a6ff523dd /arch/arm64
parent    | b8fdef311a0bd9223f10754f94fdcf1a594a3457 (diff)
arm64: remove CONFIG_DEBUG_ALIGN_RODATA feature
When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
different permissions (r-x for .text, r-- for .rodata, rw- for .data,
etc) are rounded up to 2 MiB so they can be mapped more efficiently.
In particular, it permits the segments to be mapped using level 2
block entries when using 4k pages, which is expected to result in less
TLB pressure.
However, the mappings for the bulk of the kernel will use level 2
entries anyway, and the misaligned fringes are organized such that they
can take advantage of the contiguous bit, and use far fewer level 3
entries than would be needed otherwise.
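
As a rough sketch of the arithmetic (a hypothetical stand-alone program, not kernel code, assuming a 4 KiB granule where a level 2 block maps 2 MiB and a contiguous-bit run covers 16 level 3 entries, i.e. 64 KiB; the fringe size used is a worst-case guess):

/*
 * Sketch only: compare the TLB footprint of the worst-case fringe of a
 * 64 KiB-aligned segment, with and without the contiguous bit, against
 * padding everything out to a 2 MiB level 2 block.
 */
#include <stdio.h>

#define SZ_4K   (4UL * 1024)
#define SZ_64K  (64UL * 1024)
#define SZ_2M   (2UL * 1024 * 1024)

/* Entries of 'entry_size' bytes needed to cover 'len' bytes, rounded up. */
static unsigned long entries(unsigned long len, unsigned long entry_size)
{
	return (len + entry_size - 1) / entry_size;
}

int main(void)
{
	/* Worst-case fringe below a 2 MiB boundary for 64 KiB alignment. */
	unsigned long fringe = SZ_2M - SZ_64K;

	printf("fringe: %lu KiB\n", fringe / 1024);
	printf("level 3 entries, no contiguous bit (one TLB entry each): %lu\n",
	       entries(fringe, SZ_4K));
	printf("contiguous 16-PTE runs (one TLB entry each):             %lu\n",
	       entries(fringe, SZ_64K));
	printf("2 MiB alignment instead: 1 level 2 block, up to %lu KiB padding\n",
	       (SZ_2M - SZ_64K) / 1024);
	return 0;
}

For the 1984 KiB worst case this prints 496 individual level 3 translations versus 31 contiguous runs, each of which can be cached as a single TLB entry.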
This makes the value of this feature dubious at best, and since it is not
enabled in defconfig or in the distro configs, it does not appear to be
in wide use either. So let's just remove it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Laura Abbott <labbott@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Diffstat (limited to 'arch/arm64')
-rw-r--r-- | arch/arm64/Kconfig.debug        | 13
-rw-r--r-- | arch/arm64/include/asm/memory.h | 12
2 files changed, 1 insertion, 24 deletions
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 1c906d932d6b..a1efa246c9ed 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -52,19 +52,6 @@ config DEBUG_WX
 
 	  If in doubt, say "Y".
 
-config DEBUG_ALIGN_RODATA
-	depends on STRICT_KERNEL_RWX
-	bool "Align linker sections up to SECTION_SIZE"
-	help
-	  If this option is enabled, sections that may potentially be marked as
-	  read only or non-executable will be aligned up to the section size of
-	  the kernel. This prevents sections from being split into pages and
-	  avoids a potential TLB penalty. The downside is an increase in
-	  alignment and potentially wasted space. Turn on this option if
-	  performance is more important than memory pressure.
-
-	  If in doubt, say N.
-
 config DEBUG_EFI
 	depends on EFI && DEBUG_INFO
 	bool "UEFI debugging"
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2be67b232499..a1871bb32bb1 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -120,22 +120,12 @@
 
 /*
  * Alignment of kernel segments (e.g. .text, .data).
- */
-#if defined(CONFIG_DEBUG_ALIGN_RODATA)
-/*
- *  4 KB granule:   1 level 2 entry
- * 16 KB granule: 128 level 3 entries, with contiguous bit
- * 64 KB granule:  32 level 3 entries, with contiguous bit
- */
-#define SEGMENT_ALIGN		SZ_2M
-#else
-/*
+ *
  *  4 KB granule:  16 level 3 entries, with contiguous bit
  * 16 KB granule:   4 level 3 entries, without contiguous bit
  * 64 KB granule:   1 level 3 entry
  */
 #define SEGMENT_ALIGN		SZ_64K
-#endif
 
 /*
  * Memory types available.
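
The entry counts in the comment that remains in memory.h can be cross-checked with a similar stand-alone sketch (illustration only, not kernel code; the per-granule contiguous run lengths of 16, 128 and 32 entries are taken from the comments visible in the diff above):

/*
 * Sketch only: reproduce the per-granule entry counts for a 64 KiB
 * SEGMENT_ALIGN, as listed in the memory.h comment kept by this patch.
 */
#include <stdio.h>

int main(void)
{
	const unsigned long seg_align = 64UL * 1024;            /* SZ_64K */
	const unsigned long granule[] = { 4096, 16384, 65536 };
	/* Level 3 entries covered by one contiguous-bit run per granule. */
	const unsigned long cont_run[] = { 16, 128, 32 };

	for (int i = 0; i < 3; i++) {
		unsigned long n = seg_align / granule[i];

		/* The contiguous hint only helps once a full run fits. */
		printf("%2lu KB granule: %3lu level 3 entries, %s contiguous bit\n",
		       granule[i] / 1024, n,
		       n >= cont_run[i] ? "with" : "without");
	}
	return 0;
}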