author:    Yang Jihong <yangjihong1@huawei.com>    2023-02-21 08:49:16 +0900
committer: Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2023-03-11 16:31:50 +0100
commit:    4334c26f53585a45455af324c08a4b0036bfaa8d (patch)
tree:      c39239aa55765a5ebff035d36ebed4c4d39df44b /kernel
parent:    5337bbff9899e2cdcba76e758ee57799bd5beef2 (diff)
x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
commit 868a6fc0ca2407622d2833adefe1c4d284766c4c upstream.

Since the following commit:

  commit f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")

modified the update timing of KPROBE_FLAG_OPTIMIZED, an optimized_kprobe
may be in the optimizing or unoptimizing state while op.kp->flags has
KPROBE_FLAG_OPTIMIZED set and op->list is not empty.

The check logic in __recover_optprobed_insn is therefore incorrect: a kprobe
queued for unoptimization (whose jump is still patched into the kernel text)
may be treated as if it were still being optimized, so its original
instructions are not recovered. As a result, incorrect instructions are
copied.

The optprobe_queued_unopt function needs to be exported so that it can be
invoked from the arch directory.

Link: https://lore.kernel.org/all/20230216034247.32348-2-yangjihong1@huawei.com/
Fixes: f66c0447cca1 ("kprobes: Set unoptimized flag after unoptimizing code")
Cc: stable@vger.kernel.org
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
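For context, the arch-side user of the exported helper is
__recover_optprobed_insn() in arch/x86/kernel/kprobes/opt.c; that hunk is not
shown here, since this view is limited to kernel/. A sketch of the corrected
check, paraphrased from the upstream patch rather than quoted verbatim:

	/* Sketch, paraphrased from the x86 half of upstream commit
	 * 868a6fc0ca24: __recover_optprobed_insn() must recover the original
	 * instructions not only for a fully optimized kprobe (op->list empty)
	 * but also for one queued for unoptimization, whose jump is still
	 * patched into the text.
	 */
	if (kp && kprobe_optimized(kp)) {
		op = container_of(kp, struct optimized_kprobe, kp);
		/* Before the fix, only list_empty(&op->list) was accepted, so
		 * a probe waiting on unoptimizing_list fell through and the
		 * jump-patched bytes were copied as if they were the
		 * original instructions.
		 */
		if (list_empty(&op->list) || optprobe_queued_unopt(op))
			goto found;
	}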
Diffstat (limited to 'kernel')
-rw-r--r--    kernel/kprobes.c    2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 33aba4e2e3e3..721bc92b6958 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -626,7 +626,7 @@ void wait_for_kprobe_optimizer(void)
 	mutex_unlock(&kprobe_mutex);
 }
 
-static bool optprobe_queued_unopt(struct optimized_kprobe *op)
+bool optprobe_queued_unopt(struct optimized_kprobe *op)
 {
 	struct optimized_kprobe *_op;
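For reference, the full body of the helper that this hunk un-statics, as it
appears in mainline kernel/kprobes.c (the hunk above truncates after the local
declaration); shown as a sketch, so verify against the tree:

	/* Return true if @op is on unoptimizing_list, i.e. its
	 * unoptimization has been queued but not yet carried out by the
	 * kprobe optimizer work.
	 */
	bool optprobe_queued_unopt(struct optimized_kprobe *op)
	{
		struct optimized_kprobe *_op;

		list_for_each_entry(_op, &unoptimizing_list, list) {
			if (op == _op)
				return true;
		}

		return false;
	}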