| Commit message | Author | Age | Files | Lines |
|
getline() returns -1 at EOF as well as on error. It also doesn't set
errno to 0 on success, so initialize it to 0 before using errno to check
for an error condition. See the paragraph here [1]:
For some system calls and library functions (e.g., getpriority(2)),
-1 is a valid return on success. In such cases, a successful return
can be distinguished from an error return by setting errno to zero
before the call, and then, if the call returns a status that indicates
that an error may have occurred, checking to see if errno has a
nonzero value.
Bear has a bug [2] that launches processes with errno set and causes the
following build failure:
$ bear -- make LLVM=1
...
LD .tmp_vmlinux.kallsyms1
NM .tmp_vmlinux.kallsyms1.syms
KSYMS .tmp_vmlinux.kallsyms1.S
read_symbol: Invalid argument
[1]: https://linux.die.net/man/3/errno
[2]: https://github.com/rizsotto/Bear/issues/469
Fixes: 1c975da56a6f ("scripts/kallsyms: remove KSYM_NAME_LEN_BUFFER")
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: James Clark <james.clark@arm.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
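Below is a minimal sketch of the pattern this fix relies on (illustrative only, not the actual scripts/kallsyms.c code; the function name is made up):

#define _GNU_SOURCE		/* for getline() */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Read one line, distinguishing EOF from a real error as described above. */
static char *read_line_or_die(FILE *in)
{
        char *buf = NULL;
        size_t buflen = 0;

        errno = 0;              /* getline() does not clear errno on success */
        if (getline(&buf, &buflen, in) < 0) {
                if (errno) {    /* -1 with errno set: a real error */
                        perror("read_line_or_die");
                        exit(EXIT_FAILURE);
                }
                free(buf);      /* -1 with errno still 0: plain EOF */
                return NULL;
        }
        return buf;
}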
|
Commit 6eb4bd92c1ce ("kallsyms: strip LTO suffixes from static functions")
stripped all function/variable suffixes starting with '.' regardless
of whether those suffixes are generated in LTO mode or not. In fact,
as far as I know, in LTO mode, when a static function/variable is
promoted to the global scope, a '.llvm.<...>' suffix is added.
The existing mechanism breaks live patching for an LTO kernel even if
no <symbol>.llvm.<...> symbols are involved. For example, for the following
kernel symbols:
$ grep bpf_verifier_vlog /proc/kallsyms
ffffffff81549f60 t bpf_verifier_vlog
ffffffff8268b430 d bpf_verifier_vlog._entry
ffffffff8282a958 d bpf_verifier_vlog._entry_ptr
ffffffff82e12a1f d bpf_verifier_vlog.__already_done
'bpf_verifier_vlog' is a static function. '_entry', '_entry_ptr' and
'__already_done' are static variables used inside 'bpf_verifier_vlog',
so LLVM promotes them to file-level statics with the prefix 'bpf_verifier_vlog.'.
Note that this function-level to file-level static promotion also
happens without LTO.
Given the symbol name 'bpf_verifier_vlog', the current mechanism on an LTO
kernel returns 4 symbols to the live patching subsystem, which the live
patching subsystem cannot handle. On a non-LTO kernel, only one symbol
is returned.
After a lengthy discussion in [1], the suggestion is to separate two
cases:
(1). new symbols with suffix which are generated regardless of whether
LTO is enabled or not, and
(2). new symbols with suffix generated only when LTO is enabled.
The cleanup_symbol_name() should only remove suffixes for case (2).
Case (1) should not be changed so it can work uniformly with or without LTO.
This patch removes only the LTO-generated '.llvm.<...>' suffix, so live
patching and tracing should work the same way as for a non-LTO kernel.
cleanup_symbol_name() in scripts/kallsyms.c is also changed to use the same
filtering pattern so that the kernel and the kallsyms tool have the same
expectation of the symbol order.
[1] https://lore.kernel.org/live-patching/20230615170048.2382735-1-song@kernel.org/T/#u
Fixes: 6eb4bd92c1ce ("kallsyms: strip LTO suffixes from static functions")
Reported-by: Song Liu <song@kernel.org>
Signed-off-by: Yonghong Song <yhs@fb.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Acked-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20230628181926.4102448-1-yhs@fb.com
Signed-off-by: Kees Cook <keescook@chromium.org>
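A minimal sketch of the case (2) filtering described above (illustrative; the real cleanup_symbol_name() implementations may differ in detail):

#include <string.h>

/*
 * Strip only the LTO-generated ".llvm.<...>" suffix (case (2) above) and keep
 * suffixes such as ".__already_done" that also exist without LTO (case (1)).
 */
static void strip_llvm_suffix(char *name)
{
        char *p = strstr(name, ".llvm.");

        if (p)
                *p = '\0';
}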
|
You do not need to decide the buffer size statically.
Use getline() to grow the line buffer as needed.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nicolas Schier <n.schier@avm.de>
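A sketch of the idea, assuming a simple line-oriented parser (not the exact code): getline() allocates the buffer and enlarges it whenever a longer line arrives, so no fixed-size array has to be chosen up front.

#define _GNU_SOURCE		/* for getline() */
#include <stdio.h>
#include <stdlib.h>

static void parse_lines(FILE *in)
{
        char *buf = NULL;       /* owned and grown by getline() */
        size_t buflen = 0;

        while (getline(&buf, &buflen, in) > 0) {
                /* parse one line from buf here */
        }

        free(buf);
}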
|
getopt_long() does not modify the long options table passed to it.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nicolas Schier <n.schier@avm.de>
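For illustration, such a table can then be declared const, since getopt_long() only reads it. This is a sketch: the option names are taken from elsewhere in this log, and the short-option values are placeholders, not the real ones.

#include <getopt.h>

static const struct option longopts[] = {
        {"all-symbols",     no_argument, NULL, 'a'},
        {"absolute-percpu", no_argument, NULL, 'p'},
        {"base-relative",   no_argument, NULL, 'b'},
        {"lto-clang",       no_argument, NULL, 'l'},
        {NULL, 0, NULL, 0},
};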
|
Commit 010a0aad39fc ("kallsyms: Correctly sequence symbols when
CONFIG_LTO_CLANG=y") added --lto-clang, and updated the usage()
function, but not the comment. Update it in the same way.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
|
Currently, expand_symbol() is called many times to get the uncompressed
symbol names for sorting, and also for adding comments.
With the output order shuffled in the previous commit, the symbol data
are now written in the following order:
(1) kallsyms_num_syms
(2) kallsyms_names <-- need compressed names
(3) kallsyms_markers
(4) kallsyms_token_table
(5) kallsyms_token_index
(6) kallsyms_addresses / kallsyms_offsets <-- need uncompressed names (for commenting)
(7) kallsyms_relative_base
(8) kallsyms_seqs_of_names <-- need uncompressed names (for sorting)
The compressed names are only needed by (2).
Call expand_symbol() between (2) and (3) to restore the original symbol
names. This requires just one expand_symbol() call for each symbol.
Call cleanup_symbol_name() between (7) and (8) instead of during sorting.
It is allowed to overwrite the ->sym field because (8) just outputs the
index instead of the name of each symbol. Again, this requires just one
cleanup_symbol_name() call for each symbol.
This refactoring makes it ~30% faster.
[Before]
$ time scripts/kallsyms --all-symbols --absolute-percpu --base-relative \
.tmp_vmlinux.kallsyms2.syms >/dev/null
real 0m1.027s
user 0m1.010s
sys 0m0.016s
[After]
$ time scripts/kallsyms --all-symbols --absolute-percpu --base-relative \
.tmp_vmlinux.kallsyms2.syms >/dev/null
real 0m0.717s
user 0m0.717s
sys 0m0.000s
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
Currently, this tool outputs symbol data in the following order.
(1) kallsyms_addresses / kallsyms_offsets
(2) kallsyms_relative_base
(3) kallsyms_num_syms
(4) kallsyms_names
(5) kallsyms_markers
(6) kallsyms_seqs_of_names
(7) kallsyms_token_table
(8) kallsyms_token_index
This commit changes the order as follows:
(1) kallsyms_num_syms
(2) kallsyms_names
(3) kallsyms_markers
(4) kallsyms_token_table
(5) kallsyms_token_index
(6) kallsyms_addresses / kallsyms_offsets
(7) kallsyms_relative_base
(8) kallsyms_seqs_of_names
The motivation is to decrease the number of function calls to
expand_symbol() and cleanup_symbol_name().
The compressed names are only required for writing 'kallsyms_names'.
If that is written first, the original symbol names can be restored once,
and the same operation does not need to be repeated.
The actual refactoring will happen in the next commit.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
scripts/kallsyms.c maintains a list of compiler-generated symbols to ignore,
but we end up with something similar in scripts/mksysmap to avoid the
"Inconsistent kallsyms data" error. See, for example, commit c17a2538704f
("mksysmap: Fix the mismatch of 'L0' symbols in System.map").
They were separately maintained prior to commit 94ff2f63d6a3 ("kbuild:
reuse mksysmap output for kallsyms").
Now that scripts/kallsyms.c parses the output of scripts/mksysmap,
it makes more sense to collect all the ignored patterns in scripts/mksysmap.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
|
Drop the symbols generated by scripts/kallsyms itself automatically
instead of maintaining the symbol list manually.
Pass the kallsyms object from the previous kallsyms step (if it exists)
as the third parameter of scripts/mksysmap, which will weed out the
generated symbols from the input to the next kallsyms step.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
The symbol types 'U' and 'N' are already filtered out by the following
line in scripts/mksysmap:
-e ' [aNUw] '
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
|
The assembler output of kallsyms.c is not meant for people to understand,
and is generally not helpful when debugging "Inconsistent kallsyms data"
warnings. I have previously struggled with these, but found it helpful
to list which symbols changed between the first and second pass in the
.tmp_vmlinux.kallsyms*.S files.
As this file is preprocessed, it's possible to add a C-style multiline
comment with the full type/name tuple.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
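For illustration, a sketch of what such an annotation could look like when emitting a table entry; the helper name and the exact comment format are made up here, not taken from the actual patch:

#include <stdio.h>

/* Emit one table entry in the generated .S file, followed by a C-style
 * comment carrying the type/name tuple so the two passes can be diffed. */
static void output_entry_with_comment(unsigned int off, char type, const char *name)
{
        printf("\t.long\t%u\t/* 0x%08x %c %s */\n", off, off, type, name);
}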
|
My randconfig build setup ran into another kallsyms warning:
Inconsistent kallsyms data
Try make KALLSYMS_EXTRA_PASS=1 as a workaround
After adding some debugging code to kallsyms.c, I saw that the recently
added kallsyms_seqs_of_names symbol can sometimes cause the second stage
table to be slightly longer than the first stage, which makes the
build inconsistent.
Add it to the exception table that contains all other kallsyms-generated
symbols.
Fixes: 60443c88f3a8 ("kallsyms: Improve the performance of kallsyms_lookup_name()")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
Pull char/misc driver updates from Greg KH:
"Here is the large set of char/misc and other driver subsystem changes
for 6.2-rc1. Nothing earth-shattering in here at all, just a lot of
new driver development and minor fixes.
Highlights include:
- fastrpc driver updates
- iio new drivers and updates
- habanalabs driver updates for new hardware and features
- slimbus driver updates
- speakup module parameters added to aid in boot time configuration
- i2c probe_new conversions for lots of different drivers
- other small driver fixes and additions
One semi-interesting change in here is the increase of the number of
misc dynamic minors available to 1048448 to handle new huge-cpu
systems.
All of these have been in linux-next for a while with no reported
problems"
* tag 'char-misc-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (521 commits)
extcon: usbc-tusb320: Convert to i2c's .probe_new()
extcon: rt8973: Convert to i2c's .probe_new()
extcon: fsa9480: Convert to i2c's .probe_new()
extcon: max77843: Replace irqchip mask_invert with unmask_base
chardev: fix error handling in cdev_device_add()
mcb: mcb-parse: fix error handing in chameleon_parse_gdd()
drivers: mcb: fix resource leak in mcb_probe()
coresight: etm4x: fix repeated words in comments
coresight: cti: Fix null pointer error on CTI init before ETM
coresight: trbe: remove cpuhp instance node before remove cpuhp state
counter: stm32-lptimer-cnt: fix the check on arr and cmp registers update
misc: fastrpc: Add dma_mask to fastrpc_channel_ctx
misc: fastrpc: Add mmap request assigning for static PD pool
misc: fastrpc: Safekeep mmaps on interrupted invoke
misc: fastrpc: Add support for audiopd
misc: fastrpc: Rework fastrpc_req_munmap
misc: fastrpc: Use fastrpc_map_put in fastrpc_map_create on fail
misc: fastrpc: Add fastrpc_remote_heap_alloc
misc: fastrpc: Add reserved mem support
misc: fastrpc: Rename audio protection domain to root
...
|
The comment in scripts/kallsyms.c describing the usage of
scripts/kallsyms does not reflect the latest implementation.
Fix the comment to be equivalent to what the usage() function prints.
Signed-off-by: Yuma Ueda <cyan@0x00a1e9.dev>
Reviewed-by: Miguel Ojeda <ojeda@kernel.org>
Link: https://lore.kernel.org/r/20221118133631.4554-1-cyan@0x00a1e9.dev
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
kallsyms_seqs_of_names[] records the symbol indices sorted by address, so the
maximum value it holds is the number of symbols. Since 2^24 = 16777216,
three bytes are enough to store an index. This saves
(1 * kallsyms_num_syms) bytes of memory.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
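A sketch of the 3-byte encoding described above; the byte order and helper names are assumptions for illustration, not the exact implementation:

#include <stdio.h>

/* Tool side: emit one sequence number as 3 bytes (enough while the
 * number of symbols stays below 2^24). */
static void output_seq3(unsigned int seq)
{
        printf("\t.byte 0x%02x, 0x%02x, 0x%02x\n",
               (seq >> 16) & 0xff, (seq >> 8) & 0xff, seq & 0xff);
}

/* Kernel side: read entry i back from the packed table. */
static unsigned int get_seq3(const unsigned char *seqs, unsigned int i)
{
        const unsigned char *p = seqs + 3 * i;

        return (p[0] << 16) | (p[1] << 8) | p[2];
}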
|
LLVM appends various suffixes to local functions and variables; the suffixes
observed so far are:
- foo.llvm.[0-9a-f]+
- foo.[0-9a-f]+
Therefore, when CONFIG_LTO_CLANG=y, kallsyms_lookup_name() needs to
truncate the suffix of the symbol name before comparing the local function
or variable name.
Old implementation code:
- if (strcmp(namebuf, name) == 0)
- return kallsyms_sym_address(i);
- if (cleanup_symbol_name(namebuf) && strcmp(namebuf, name) == 0)
- return kallsyms_sym_address(i);
The preceding lookup walks the symbols by address from low to high. That is,
for symbols that share the same name after the suffix is removed, the one with
the smallest address is returned first. Therefore, when sorting in the
tool, symbols with the same raw name should be sorted by address in
ascending order.
ASCII[.] = 2e
ASCII[0-9] = 30,39
ASCII[A-Z] = 41,5a
ASCII[_] = 5f
ASCII[a-z] = 61,7a
According to the preceding ASCII code values, the following sort order
is strictly followed:
------------------------------------
|     main-key     |    sub-key    |
|------------------+---------------|
|                  | addr_lowest   |
| <name>           | ...           |
| <name>.<suffix>  | ...           |
|                  | addr_highest  |
|------------------+---------------|
| <name>?<others>  |               |  // ? is [_A-Za-z0-9]
------------------------------------
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
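A sketch of the ordering rule described above, written as a qsort() comparator; the struct layout and field names are illustrative, not the real scripts/kallsyms.c types:

#include <string.h>

struct sym {
        unsigned long long addr;
        const char *name;       /* raw symbol name, suffix not yet stripped */
};

/* Primary key: the name; tie-break: address in ascending order, so the
 * lowest address wins among symbols whose cleaned-up names collide. */
static int compare_syms(const void *a, const void *b)
{
        const struct sym *sa = a, *sb = b;
        int ret = strcmp(sa->name, sb->name);

        if (ret)
                return ret;
        if (sa->addr != sb->addr)
                return sa->addr < sb->addr ? -1 : 1;
        return 0;
}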
|
Currently, to search for a symbol, we need to expand the symbols in
'kallsyms_names' one by one, and then use the expanded string for
comparison. It's O(n).
If we sort names in ascending order like addresses, we can also use
binary search. It's O(log(n)).
In order not to change the implementation of "/proc/kallsyms", the table
kallsyms_names[] is still stored in one-to-one correspondence with the
addresses in ascending order.
Add the array kallsyms_seqs_of_names[]: it is indexed by the sequence number
of a name in sorted-name order, and each entry holds the sequence number of
that symbol in sorted-address order. For example, assume that the index of
NameX in kallsyms_seqs_of_names[] is 'i' and the content of
kallsyms_seqs_of_names[i] is 'k'; then the corresponding address of NameX is
kallsyms_addresses[k], and its offset in kallsyms_names[] is
get_symbol_offset(k).
Note that the memory usage will increase by (4 * kallsyms_num_syms)
bytes, the next two patches will reduce (1 * kallsyms_num_syms) bytes
and properly handle the case CONFIG_LTO_CLANG=y.
Performance test results: (x86)
Before:
min=234, max=10364402, avg=5206926
min=267, max=11168517, avg=5207587
After:
min=1016, max=90894, avg=7272
min=1014, max=93470, avg=7293
The average lookup performance of kallsyms_lookup_name() improved 715x.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
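A simplified sketch of the lookup this enables; the parameters stand in for the in-kernel tables and decompression helpers, which are not spelled out here:

#include <string.h>

/*
 * names[]: expanded symbol names, indexed in address order.
 * seqs[]:  kallsyms_seqs_of_names equivalent; seqs[i] is the address-order
 *          index of the i-th name in sorted-name order.
 * Returns the address-order index 'k' described above, or -1 if not found.
 */
static long lookup_name(const char *name, const char *const *names,
                        const unsigned int *seqs, unsigned int num_syms)
{
        unsigned int lo = 0, hi = num_syms;

        while (lo < hi) {
                unsigned int mid = lo + (hi - lo) / 2;
                unsigned int k = seqs[mid];
                int cmp = strcmp(names[k], name);

                if (!cmp)
                        return k;       /* address is kallsyms_addresses[k] */
                if (cmp < 0)
                        lo = mid + 1;
                else
                        hi = mid;
        }
        return -1;
}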
|
Except for the function name build_initial_tok_table(), the abbreviation
'tok' is not used anywhere else.
$ cat scripts/kallsyms.c | grep tok | wc -l
33
$ cat scripts/kallsyms.c | grep token | wc -l
31
Here, it would be clearer to use the full name.
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
|
git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild updates from Masahiro Yamada:
- Remove potentially incomplete targets when Kbuild is interrupted by
SIGINT etc., in case GNU Make fails to do so when stderr is piped
to another program.
- Rewrite the single target build so it works more correctly.
- Fix rpm-pkg builds with V=1.
- List top-level subdirectories in ./Kbuild.
- Ignore auto-generated __kstrtab_* and __kstrtabns_* symbols in
kallsyms.
- Avoid two different modules in lib/zstd/ having shared code, which
potentially causes the common code to be built as built-in and modular
back-and-forth.
- Unify two modpost invocations to optimize the build process.
- Remove head-y syntax in favor of linker scripts for placing
particular sections in the head of vmlinux.
- Bump the minimal GNU Make version to 3.82.
- Clean up misc Makefiles and scripts.
* tag 'kbuild-v6.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (41 commits)
docs: bump minimal GNU Make version to 3.82
ia64: simplify esi object addition in Makefile
Revert "kbuild: Check if linker supports the -X option"
kbuild: rebuild .vmlinux.export.o when its prerequisite is updated
kbuild: move modules.builtin(.modinfo) rules to Makefile.vmlinux_o
zstd: Fixing mixed module-builtin objects
kallsyms: ignore __kstrtab_* and __kstrtabns_* symbols
kallsyms: take the input file instead of reading stdin
kallsyms: drop duplicated ignore patterns from kallsyms.c
kbuild: reuse mksysmap output for kallsyms
mksysmap: update comment about __crc_*
kbuild: remove head-y syntax
kbuild: use obj-y instead extra-y for objects placed at the head
kbuild: hide error checker logs for V=1 builds
kbuild: re-run modpost when it is updated
kbuild: unify two modpost invocations
kbuild: move vmlinux.o rule to the top Makefile
kbuild: move .vmlinux.objs rule to Makefile.modpost
kbuild: list sub-directories in ./Kbuild
Makefile.compiler: replace cc-ifversion with compiler-specific macros
...
|
This gets rid of the pipe operator connected with 'cat'.
Also use getopt_long() to parse the command line.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
Now that kallsyms.c parses the output from mksysmap, some symbols have
already been dropped.
Move comments to scripts/mksysmap. Also, make the grep command readable.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
Pull kcfi updates from Kees Cook:
"This replaces the prior support for Clang's standard Control Flow
Integrity (CFI) instrumentation, which has required a lot of special
conditions (e.g. LTO) and work-arounds.
The new implementation ("Kernel CFI") is specific to C, directly
designed for the Linux kernel, and takes advantage of architectural
features like x86's IBT. This series retains arm64 support and adds
x86 support.
GCC support is expected in the future[1], and additional "generic"
architectural support is expected soon[2].
Summary:
- treewide: Remove old CFI support details
- arm64: Replace Clang CFI support with Clang KCFI support
- x86: Introduce Clang KCFI support"
Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107048 [1]
Link: https://github.com/samitolvanen/llvm-project/commits/kcfi_generic [2]
* tag 'kcfi-v6.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux: (22 commits)
x86: Add support for CONFIG_CFI_CLANG
x86/purgatory: Disable CFI
x86: Add types to indirectly called assembly functions
x86/tools/relocs: Ignore __kcfi_typeid_ relocations
kallsyms: Drop CONFIG_CFI_CLANG workarounds
objtool: Disable CFI warnings
objtool: Preserve special st_shndx indexes in elf_update_symbol
treewide: Drop __cficanonical
treewide: Drop WARN_ON_FUNCTION_MISMATCH
treewide: Drop function_nocfi
init: Drop __nocfi from __init
arm64: Drop unneeded __nocfi attributes
arm64: Add CFI error handling
arm64: Add types to indirect called assembly functions
psci: Fix the function type for psci_initcall_t
lkdtm: Emit an indirect call for CFI tests
cfi: Add type helper macros
cfi: Switch to -fsanitize=kcfi
cfi: Drop __CFI_ADDRESSABLE
cfi: Remove CONFIG_CFI_CLANG_SHADOW
...
|
The compiler generates __kcfi_typeid_ symbols for annotating assembly
functions with type information. These are constants that can be
referenced in assembly code and are resolved by the linker. Ignore
them in kallsyms.
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20220908215504.3686827-3-samitolvanen@google.com
|
Rust symbols can become quite long due to namespacing introduced
by modules, types, traits, generics, etc. For instance,
the following code:
pub mod my_module {
pub struct MyType;
pub struct MyGenericType<T>(T);
pub trait MyTrait {
fn my_method() -> u32;
}
impl MyTrait for MyGenericType<MyType> {
fn my_method() -> u32 {
42
}
}
}
generates a symbol of length 96 when using the upcoming v0 mangling scheme:
_RNvXNtCshGpAVYOtgW1_7example9my_moduleINtB2_13MyGenericTypeNtB2_6MyTypeENtB2_7MyTrait9my_method
At the moment, Rust symbols may reach up to 300 characters in length.
Setting 512 as the maximum seems like a reasonable choice to
keep some headroom.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Petr Mladek <pmladek@suse.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Co-developed-by: Alex Gaynor <alex.gaynor@gmail.com>
Signed-off-by: Alex Gaynor <alex.gaynor@gmail.com>
Co-developed-by: Wedson Almeida Filho <wedsonaf@google.com>
Signed-off-by: Wedson Almeida Filho <wedsonaf@google.com>
Co-developed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Gary Guo <gary@garyguo.net>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
Rust symbols can become quite long due to namespacing introduced
by modules, types, traits, generics, etc.
Increasing the limit to 255 is not enough in some cases; therefore,
introduce support for longer lengths in the symbol table.
In order to avoid increasing all lengths to 2 bytes (since most
of them are small, including many Rust ones), use ULEB128 to
keep smaller symbols in 1 byte, with the rest in 2 bytes.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Co-developed-by: Alex Gaynor <alex.gaynor@gmail.com>
Signed-off-by: Alex Gaynor <alex.gaynor@gmail.com>
Co-developed-by: Wedson Almeida Filho <wedsonaf@google.com>
Signed-off-by: Wedson Almeida Filho <wedsonaf@google.com>
Co-developed-by: Gary Guo <gary@garyguo.net>
Signed-off-by: Gary Guo <gary@garyguo.net>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Co-developed-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
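A sketch of the length encoding described above (ULEB128); the helper and its buffer handling are illustrative:

/* ULEB128-encode 'value': lengths below 128 stay in one byte, longer
 * symbol names take two bytes. Returns the number of bytes written. */
static int uleb128_encode(unsigned char *out, unsigned int value)
{
        int len = 0;

        do {
                unsigned char byte = value & 0x7f;

                value >>= 7;
                if (value)
                        byte |= 0x80;   /* continuation bit: more bytes follow */
                out[len++] = byte;
        } while (value);

        return len;
}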
|
This adds a static assert to ensure `KSYM_NAME_LEN_BUFFER`
gets updated when `KSYM_NAME_LEN` changes.
The relationship used is one that keeps the new size (512+1)
close to the original buffer size (500).
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Co-developed-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
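A sketch of the technique; the concrete values and the exact relationship between the two constants are illustrative, not necessarily the ones chosen by the patch:

#define KSYM_NAME_LEN           128
#define KSYM_NAME_LEN_BUFFER    512     /* input-line buffer, sized from KSYM_NAME_LEN */

/* Fail the build if one constant is bumped without the other. */
_Static_assert(KSYM_NAME_LEN_BUFFER == KSYM_NAME_LEN * 4,
               "Please keep KSYM_NAME_LEN_BUFFER in sync with KSYM_NAME_LEN");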
|
This introduces `KSYM_NAME_LEN_BUFFER` in place of the previously
hardcoded size of the input buffer.
It will also make it easier to update the size in a single place
in a later patch.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Co-developed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
This removes one place where the `500` constant is hardcoded.
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Geert Stappers <stappers@stappers.nl>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Co-developed-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
|
The kallsyms program supports the --absolute-percpu option but does not list
it in the usage message; fix that.
Signed-off-by: Yuntao Wang <ytcoode@gmail.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
Reintroduce the __kvm_nvhe_ symbols in kallsyms, ignoring the local
symbols in this namespace. The local symbols are not informative and
can cause aliasing issues when symbolizing the addresses.
With the necessary symbols now in kallsyms we can symbolize nVHE
addresses using the %p print format specifier:
[ 98.916444][ T426] kvm [426]: nVHE hyp panic at: [<ffffffc0096156fc>] __kvm_nvhe_overflow_stack+0x8/0x34!
Signed-off-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220420214317.3303360-7-kaleshsingh@google.com
|
The LLVM compiler can generate lots of local labels ('.LBB', '.Ltmpxxx',
'.L__unnamed_xx', etc.). These symbols are usually useless for debugging,
and they might overlap with handwritten symbols.
Before this change, a dumpstack shows a local symbol for epc:
[ 0.040341][ T0] Hardware name: riscv-virtio,qemu (DT)
[ 0.040376][ T0] epc : .LBB6_14+0x22/0x6a
[ 0.040452][ T0] ra : restore_all+0x12/0x6e
The simple solution is to ignore all local labels prefixed by '.L'.
Handwritten symbols that need to be preserved should drop the '.L'
prefix.
After this change, the C-defined symbol is shown, so we can locate the
problematic code immediately:
[ 0.035795][ T0] Hardware name: riscv-virtio,qemu (DT)
[ 0.036332][ T0] epc : trace_hardirqs_on+0x54/0x13c
[ 0.036567][ T0] ra : restore_all+0x12/0x6e
Signed-off-by: Changbin Du <changbin.du@gmail.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
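The check itself is tiny; a sketch (the helper name is illustrative):

#include <string.h>

/* Ignore assembler-local labels: anything beginning with ".L". */
static int is_local_label(const char *name)
{
        return strncmp(name, ".L", 2) == 0;
}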
|
ARM randconfig builds with lld sometimes show a build failure
from kallsyms:
Inconsistent kallsyms data
Try make KALLSYMS_EXTRA_PASS=1 as a workaround
The problem is that the veneers/thunks added by the linker extend
the symbol table, which in turn leads to more veneers being needed,
so it may take a few extra iterations to converge.
This bug has been fixed multiple times before, but comes back every time
a new symbol name is used. lld uses a different set of identifiers from
ld.bfd, so the additional ones need to be added as well.
I looked through the sources and found that arm64 and mips define similar
prefixes, so I'm adding those as well, aside from the ones I observed. I'm
not sure about powerpc64, which seems to already be handled through a
section match, but if it comes back, the "__long_branch_" and "__plt_"
prefixes would have to get added as well.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
PowerPC allmodconfig often fails to build as follows:
LD .tmp_vmlinux.kallsyms1
KSYM .tmp_vmlinux.kallsyms1.o
LD .tmp_vmlinux.kallsyms2
KSYM .tmp_vmlinux.kallsyms2.o
LD .tmp_vmlinux.kallsyms3
KSYM .tmp_vmlinux.kallsyms3.o
LD vmlinux
SORTTAB vmlinux
SYSMAP System.map
Inconsistent kallsyms data
Try make KALLSYMS_EXTRA_PASS=1 as a workaround
make[2]: *** [../Makefile:1162: vmlinux] Error 1
Setting KALLSYMS_EXTRA_PASS=1 does not help.
This is caused by the compiler inserting stubs such as *.long_branch.*
and *.plt_branch.*
$ powerpc-linux-nm -n .tmp_vmlinux.kallsyms2
[ snip ]
c00000000210c010 t 00000075.plt_branch.da9:19
c00000000210c020 t 00000075.plt_branch.1677:5
c00000000210c030 t 00000075.long_branch.memmove
c00000000210c034 t 00000075.plt_branch.9e0:5
c00000000210c044 t 00000075.plt_branch.free_initrd_mem
...
Actually, the problem mentioned in the scripts/link-vmlinux.sh comment,
"In theory it's possible this results in even more stubs, but unlikely",
is happening here and ends up requiring another kallsyms step.
scripts/kallsyms.c already ignores various compiler stubs. Let's do
the same so that kallsyms for PowerPC always succeeds in 2 steps.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
|
Add new folders arch/arm64/kvm/hyp/{vhe,nvhe} and Makefiles for building code
that runs in EL2 under VHE/nVHE KVM, respectively. Add an include folder for
hyp-specific header files which will include code common to VHE/nVHE.
Build nVHE code with -D__KVM_NVHE_HYPERVISOR__, VHE code with
-D__KVM_VHE_HYPERVISOR__.
Under nVHE, compile each source file into a `.hyp.tmp.o` object first, then
prefix all its symbols with "__kvm_nvhe_" using `objcopy` to produce
a `.hyp.o`. The suffixes were chosen so that it would be possible for VHE and
nVHE to share some source files, but compiled with different CFLAGS.
The nVHE ELF symbol prefix is added to kallsyms.c as ignored. EL2-only symbols
will never appear in EL1 stack traces.
Due to symbol prefixing, add a section in image-vars.h for aliases of symbols
that are defined in nVHE EL2 and accessed by kernel in EL1 or vice versa.
Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200625131420.71444-4-dbrazdil@google.com
|
Due to a bug-report that was compiler-dependent, I updated one of my
machines to gcc-10. That shows a lot of new warnings. Happily they
seem to be mostly the valid kind, but it's going to cause a round of
churn for getting rid of them..
This is the really low-hanging fruit of removing a couple of zero-sized
arrays in some core code. We have had a round of these patches before,
and we'll have many more coming, and there is nothing special about
these except that they were particularly trivial, and triggered more
warnings than most.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
There is this code in the read_symbol() function in scripts/kallsyms.c:
        if (is_ignored_symbol(name, type))
                return NULL;

        /* Ignore most absolute/undefined (?) symbols. */
        if (strcmp(name, "_text") == 0)
                _text = addr;
But the is_ignored_symbol function returns true for name="_text" and
type='A'. So the next condition is not executed and the _text variable
is always zero.
As a result, the kallsyms_relative_base symbol gets a wrong value from this
code (when CONFIG_KALLSYMS_BASE_RELATIVE is defined):
        if (base_relative) {
                output_label("kallsyms_relative_base");
                output_address(relative_base);
                printf("\n");
        }
This is because the output_address() function uses the _text variable.
Consequently, kallsyms_lookup() and all related functions in the kernel
do not work properly. For example, the stack trace in an oops:
Call Trace:
[aa095e58] [809feab8] kobj_ns_ops_tbl+0x7ff09ac8/0x7ff1c1c4 (unreliable)
[aa095e98] [80002b64] kobj_ns_ops_tbl+0x7f50db74/0x80000010
[aa095ef8] [809c3d24] kobj_ns_ops_tbl+0x7feced34/0x7ff1c1c4
[aa095f28] [80002ed0] kobj_ns_ops_tbl+0x7f50dee0/0x80000010
[aa095f38] [8000f238] kobj_ns_ops_tbl+0x7f51a248/0x80000010
The right stack trace:
Call Trace:
[aa095e58] [809feab8] module_vdu_video_init+0x2fc/0x3bc (unreliable)
[aa095e98] [80002b64] do_one_initcall+0x40/0x1f0
[aa095ef8] [809c3d24] kernel_init_freeable+0x164/0x1d8
[aa095f28] [80002ed0] kernel_init+0x14/0x124
[aa095f38] [8000f238] ret_from_kernel_thread+0x14/0x1c
[masahiroy@kernel.org:
This issue happens on binutils <= 2.22
The following commit fixed it:
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=d2667025dd30611514810c28bee9709e4623012a
The symbol type of _text is 'T' on binutils >= 2.23
The minimal supported binutils version for the kernel build is 2.21
]
Signed-off-by: Mikhail Petrov <Mikhail.Petrov@mir.dev>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
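A sketch of the fix direction described above: record _text before the ignore check so that an 'A'-typed _text (as emitted by old binutils) still sets the relative base. The function shape is illustrative, not the actual patch:

#include <stdbool.h>
#include <string.h>

extern bool is_ignored_symbol(const char *name, char type);

static unsigned long long _text;

/* Returns true if the symbol should be kept in the table. */
static bool keep_symbol(unsigned long long addr, char type, const char *name)
{
        /* Record _text first, even if the symbol itself is ignored. */
        if (strcmp(name, "_text") == 0)
                _text = addr;

        if (is_ignored_symbol(name, type))
                return false;

        return true;
}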
|
memcpy() writes one more byte than allocated.
Fixes: 8d60526999aa ("scripts/kallsyms: change table to store (strcut sym_entry *)")
Reported-by: youling257 <youling257@gmail.com>
Reported-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Tested-by: Pavel Machek <pavel@ucw.cz>
|
The symbol table is extended by 10000 entries at a time using realloc(),
which may copy the data to a new buffer.
To decrease the amount of data that may be copied, change the table
to store pointers.
The symbol type and symbol name are appended at the end of
(struct sym_entry) and allocated together with the struct body.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
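A sketch of the layout described above; field names are illustrative and the real struct has more members:

#include <stdlib.h>
#include <string.h>

struct sym_entry {
        unsigned long long addr;
        unsigned int len;               /* length of type byte + name */
        unsigned char sym[];            /* [0] = type, [1..] = name, NUL-terminated */
};

/* Allocate the struct and the type+name bytes in a single malloc(); the
 * table then only stores the returned pointers. */
static struct sym_entry *new_sym_entry(unsigned long long addr, char type,
                                       const char *name)
{
        size_t name_len = strlen(name);
        struct sym_entry *s = malloc(sizeof(*s) + name_len + 2);

        if (!s)
                exit(EXIT_FAILURE);
        s->addr = addr;
        s->len = name_len + 1;
        s->sym[0] = type;
        memcpy(s->sym + 1, name, name_len + 1); /* includes the trailing NUL */
        return s;
}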
|
I will use 'sym' for the pointer to struct sym_entry in the next commit.
Rename 'sym' and 'stype' to 'name' and 'type', which are more intuitive.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
Since commit 5e5c4fa78745 ("scripts/kallsyms: shrink table before
sorting it"), kallsyms_relative_base can be larger than _text, which
causes overflow when building the 32-bit kernel.
https://lkml.org/lkml/2019/12/7/156
This is because _text is, unless --all-symbols is specified, now
trimmed from the symbol table before record_relative_base() is called.
Handle the offset signedness also for kallsyms_relative_base. Introduce
a new helper, output_address(), to reduce the code duplication.
Fixes: 5e5c4fa78745 ("scripts/kallsyms: shrink table before sorting it")
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
|
These are set to zero even without the explicit initializers.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
Put the relevant code close together.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
There is no more reason to check the return value of
check_symbol_range().
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
Collect the ignored patterns into is_ignored_symbol().
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
Refactor to shorten the code.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
Unless the address range matters, symbols can be ignored earlier,
which avoids unneeded memory allocation.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
Add 'const' where a function does not write to the pointer dereferences.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
The callers of this function expect (unsigned char *). I do not see
a good reason to make this function return (void *).
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
You can do equivalent things with strspn(). I do not see a noticeable
performance difference.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
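For illustration only (the helper that was replaced is not quoted here): strspn() returns the length of the leading run of characters drawn from a given set, so a hand-rolled counting loop collapses to one call.

#include <string.h>

/* Count leading underscores of a symbol name, for example. */
static size_t count_prefix_underscores(const char *name)
{
        return strspn(name, "_");
}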
|
sym_entry::sym is (unsigned char *) instead of (char *) because
kallsyms exploits the MSB for compression, and the characters are
used as indices into the token_profit array.
However, this requires casting (unsigned char *) to (char *) in some
places since standard library functions such as strcmp() and strlen()
expect (char *).
Introduce a new helper, sym_name(), which advances the given pointer
by 1 and casts it to (char *).
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
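The helper as described is essentially a one-liner; a sketch against the struct layout sketched earlier in this log:

/* Skip the type byte stored at sym[0] and return the name as a plain
 * C string usable with strcmp()/strlen(). */
static char *sym_name(const struct sym_entry *s)
{
        return (char *)s->sym + 1;
}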
|