author | Yang Jihong <yangjihong1@huawei.com> | 2023-02-21 08:49:16 +0900 |
---|---|---|
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2023-03-11 16:26:45 +0100 |
commit | 31f7904bef2b1ba162ee50e730da5f44095e5ac2 (patch) | |
tree | 2e873c9a3350e3760e70ccf8ea79bf81bab0668e /include/linux | |
parent | 23b1a4346f5471630da36c19d4d6e1a6ed2f4fe8 (diff) | |
x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
commit 868a6fc0ca2407622d2833adefe1c4d284766c4c upstream.
Since the following commit:
commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
may be in either the optimizing or the unoptimizing state when op.kp->flags
has KPROBE_FLAG_OPTIMIZED set and op->list is not empty.
The check logic in __recover_optprobed_insn does not distinguish these two
states: a kprobe that is being unoptimized may be incorrectly treated as one
that is still being optimized, so the original instructions are not recovered
and incorrect instructions are copied.
The optprobe_queued_unopt function needs to be exported so that it can be
invoked from the arch directory.
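
As an illustration, here is a minimal sketch of how the corrected check can
look on the __recover_optprobed_insn() side once optprobe_queued_unopt() is
visible to arch code; it models the description above and is not the verbatim
x86 hunk:

```c
/*
 * Sketch only: models the state check described above, not the exact
 * patched code.
 *
 * With KPROBE_FLAG_OPTIMIZED set and op->list non-empty, the probe may be
 * queued either for optimization (the jump has not been written yet, so
 * memory still holds the original bytes) or for unoptimization (the jump
 * is still live, so the original bytes must come from the
 * op->optinsn.copied_insn buffer).  Checking list_empty() alone can no
 * longer tell these two cases apart.
 */
if (kp && kprobe_optimized(kp)) {
	op = container_of(kp, struct optimized_kprobe, kp);
	/*
	 * Recover from the copied-instruction buffer when the probe is
	 * fully optimized (empty list) or queued for unoptimization: in
	 * both cases the jump still occupies the probed text.
	 */
	if (list_empty(&op->list) || optprobe_queued_unopt(op))
		goto found;
}
```

Distinguishing the two queued states this way means the recovery path falls
back to op->optinsn.copied_insn only while the displaced bytes are actually
live in the text.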
Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@vger.kernel.org
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'include/linux')
-rw-r--r-- | include/linux/kprobes.h | 1 |
1 file changed, 1 insertion, 0 deletions
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 019444c04f58..7ea04300167b 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -353,6 +353,7 @@ extern int proc_kprobes_optimization_handler(struct ctl_table *table,
 					     size_t *length, loff_t *ppos);
 #endif
 extern void wait_for_kprobe_optimizer(void);
+bool optprobe_queued_unopt(struct optimized_kprobe *op);
 #else
 static inline void wait_for_kprobe_optimizer(void) { }
 #endif /* CONFIG_OPTPROBES */