author		Guo Ren <guoren@linux.alibaba.com>	2022-04-07 15:33:20 +0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2022-04-15 14:15:06 +0200
commit		e5b0d0a5515f82ce151d192e2c6304f3f143bb56 (patch)
tree		835e5cbcab0a4dd1535c2c20a52beb3a38ca75f0 /arch/arm64
parent		f3d97b22a708bf9e3f3ac2ba232bcefd0b0c136b (diff)
arm64: patch_text: Fixup last cpu should be master
commit 31a099dbd91e69fcab55eef4be15ed7a8c984918 upstream.

These patch_text implementations use the stop_machine_cpuslocked
infrastructure with an atomic cpu_count. The original idea: while the
master CPU patches the text, the other CPUs should wait for it. But the
current implementation uses the first CPU as the master, which cannot
guarantee that the remaining CPUs are already waiting. This patch makes
the last CPU the master to remove that potential risk.

Fixes: ae16480785de ("arm64: introduce interfaces to hotpatch kernel and module code")
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20220407073323.743224-2-guoren@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
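For reference, the cpu_count rendezvous described above is set up by the caller of the patching callback. A rough sketch, reconstructed from the upstream arch/arm64/kernel/insn.c and not part of this patch: every online CPU is funnelled into aarch64_insn_patch_text_cb() via stop_machine_cpuslocked(), and the shared atomic counter then decides which CPU performs the patching while the rest wait.

int __kprobes aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt)
{
	struct aarch64_insn_patch patch = {
		.text_addrs = addrs,
		.new_insns = insns,
		.insn_cnt = cnt,
		.cpu_count = ATOMIC_INIT(0),	/* rendezvous counter */
	};

	if (cnt <= 0)
		return -EINVAL;

	/* Run the callback on every online CPU; they meet on cpu_count. */
	return stop_machine_cpuslocked(aarch64_insn_patch_text_cb, &patch,
				       cpu_online_mask);
}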
Diffstat (limited to 'arch/arm64')
-rw-r--r--	arch/arm64/kernel/insn.c	4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index cd37edbdedcb..bc5112b471a5 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -204,8 +204,8 @@ static int __kprobes aarch64_insn_patch_text_cb(void *arg)
 	int i, ret = 0;
 	struct aarch64_insn_patch *pp = arg;
 
-	/* The first CPU becomes master */
-	if (atomic_inc_return(&pp->cpu_count) == 1) {
+	/* The last CPU becomes master */
+	if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
 		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
 			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
 							     pp->new_insns[i]);
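For context, this is roughly how the whole callback reads once the hunk above is applied. Only the two changed lines are part of this commit; the notification increment and the wait loop in the else branch are reconstructed from the upstream tree and shown only to illustrate how the non-master CPUs wait, so treat them as an approximation rather than part of this diff.

static int __kprobes aarch64_insn_patch_text_cb(void *arg)
{
	int i, ret = 0;
	struct aarch64_insn_patch *pp = arg;

	/* The last CPU to enter the callback becomes the master. */
	if (atomic_inc_return(&pp->cpu_count) == num_online_cpus()) {
		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
			ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
							     pp->new_insns[i]);
		/* Notify the waiting CPUs with one extra increment. */
		atomic_inc(&pp->cpu_count);
	} else {
		/* Non-master CPUs spin until the master has finished. */
		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
			cpu_relax();
		isb();
	}

	return ret;
}

With num_online_cpus() as the trigger, the master cannot start patching until every other online CPU has already incremented cpu_count, i.e. has entered the callback and is about to spin; that is exactly the guarantee the first-CPU-as-master scheme could not provide.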