author		Pawan Gupta <pawan.kumar.gupta@linux.intel.com>	2022-08-02 15:47:02 -0700
committer	Borislav Petkov <bp@suse.de>				2022-08-03 14:12:18 +0200
commit		ba6e31af2be96c4d0536f2152ed6f7b6c11bca47 (patch)
tree		50fe01fb6f53af58083693c3d0a7eecf4703b45e /arch/x86
parent		2b1299322016731d56807aa49254a5ea3080b6b3 (diff)
x86/speculation: Add LFENCE to RSB fill sequence
The RSB fill sequence does not have any protection against misprediction of the conditional branch at the end of the sequence. The CPU can speculatively execute code immediately after the sequence, while RSB filling hasn't completed yet.

  #define __FILL_RETURN_BUFFER(reg, nr, sp)	\
	mov	$(nr/2), reg;			\
  771:						\
	ANNOTATE_INTRA_FUNCTION_CALL;		\
	call	772f;				\
  773:	/* speculation trap */			\
	UNWIND_HINT_EMPTY;			\
	pause;					\
	lfence;					\
	jmp	773b;				\
  772:						\
	ANNOTATE_INTRA_FUNCTION_CALL;		\
	call	774f;				\
  775:	/* speculation trap */			\
	UNWIND_HINT_EMPTY;			\
	pause;					\
	lfence;					\
	jmp	775b;				\
  774:						\
	add	$(BITS_PER_LONG/8) * 2, sp;	\
	dec	reg;				\
	jnz	771b;	<----- CPU can mispredict here.

Before the RSB is filled, RETs that come in program order after this macro can be executed speculatively, making them vulnerable to RSB-based attacks.

Mitigate it by adding an LFENCE after the conditional branch to prevent speculation while the RSB is being filled.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
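For clarity, the tail of __FILL_RETURN_BUFFER after this change (reconstructed from the diff below) reads:

	add	$(BITS_PER_LONG/8) * 2, sp;	\
	dec	reg;				\
	jnz	771b;				\
	/* barrier for jnz misprediction */	\
	lfence;

Even if the jnz is mispredicted as not-taken, the LFENCE keeps younger instructions, including the RETs that follow the macro, from executing speculatively until the fill loop has actually completed.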
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/include/asm/nospec-branch.h	4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index 4c9ba49d9b3e..d3a3cc6772ee 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -60,7 +60,9 @@
 774:						\
 	add $(BITS_PER_LONG/8) * 2, sp;		\
 	dec reg;				\
-	jnz 771b;
+	jnz 771b;				\
+	/* barrier for jnz misprediction */	\
+	lfence;
 
 #ifdef __ASSEMBLY__
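For context, a minimal sketch of how this fill sequence is typically consumed by callers. The FILL_RETURN_BUFFER wrapper and the RSB_CLEAR_LOOPS / X86_FEATURE_RSB_CTXSW names come from the kernel of this era rather than from this patch, so treat the exact call site as illustrative rather than authoritative:

	/*
	 * Illustrative use from context-switch assembly: refill the RSB so
	 * that later RET predictions cannot consume stale entries primed by
	 * the previous task. FILL_RETURN_BUFFER expands to the
	 * __FILL_RETURN_BUFFER sequence above behind an ALTERNATIVE, so the
	 * fill (and now its trailing LFENCE) is skipped entirely when the
	 * feature bit is not set.
	 */
	FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW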