author    Arthur Heymans <arthur@aheymans.xyz>    2019-11-25 12:20:01 +0100
committer Lean Sheng Tan <sheng.tan@9elements.com>    2023-03-13 13:42:32 +0000
commit    3134a8152590f6d93232f6e56ab08fd87ebe1a0d (patch)
tree      8a7569aa1f9f204e36cc7571bd2ed71eca714b24 /src/include/cpu
parent    4bad919ce47fae7187dfc8ed0c0186a78fd10597 (diff)
cpu/x86/cache: CLFLUSH programs to memory before running
When cbmem is initialized in romstage and postcar is placed in the stage
cache + cbmem, where it is run, the assumption is made that these are all
in UC memory, such that calling INVD in postcar is OK.

For performance reasons (e.g. postcar decompression) it is desirable to
cache cbmem and the stage cache during romstage. Another reason is that
AGESA sets up MTRRs during romstage to cache all DRAM, which is currently
worked around by using additional MTRRs to mark that region UC.

TESTED on asus/p5ql-em and up/squared, on both the regular and the S3
resume bootpath. Sometimes there are minimal performance improvements
when cbmem is cached (a few ms).

Change-Id: I7ff2a57aee620908b71829457ea0f5a0c410ec5b
Signed-off-by: Arthur Heymans <arthur@aheymans.xyz>
Reviewed-on: https://review.coreboot.org/c/coreboot/+/37196
Reviewed-by: Lean Sheng Tan <sheng.tan@9elements.com>
Reviewed-by: Kapil Porwal <kapilporwal@google.com>
Tested-by: build bot (Jenkins) <no-reply@coreboot.org>
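To make the intent concrete, below is a minimal sketch, not taken from this
patch: it detects CLFLUSH via CPUID leaf 01h (EDX bit 19) and flushes a loaded
program's memory range one cache line at a time before it is run, so a later
INVD cannot discard dirty lines holding the program or cbmem data. The helper
name flush_program_region(), the static definition of clflush_supported() and
the fixed 64-byte line size are illustrative assumptions; the header change in
this patch only adds the clflush_supported() declaration.

    /*
     * Minimal sketch (assumptions noted above), not this patch's
     * actual implementation.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE_SIZE 64 /* assumption; CPUID 01h EBX[15:8] * 8 gives the real size */

    static bool clflush_supported(void)
    {
            uint32_t eax = 1, ebx, ecx, edx;

            __asm__ volatile ("cpuid"
                              : "+a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx));
            (void)ebx;
            (void)ecx;
            return edx & (1u << 19); /* CPUID.01h:EDX.CLFSH */
    }

    static void flush_program_region(void *start, size_t size)
    {
            /* Align down to the first cache line covering the region. */
            uintptr_t p = (uintptr_t)start & ~((uintptr_t)CACHE_LINE_SIZE - 1);
            uintptr_t end = (uintptr_t)start + size;

            if (!clflush_supported())
                    return; /* a real caller would fall back to wbinvd() or skip caching */

            for (; p < end; p += CACHE_LINE_SIZE)
                    __asm__ volatile ("clflush (%0)" :: "r" ((void *)p) : "memory");
    }

A caller that has just copied the postcar program into cached cbmem could
invoke such a helper on the program's load region before jumping to its entry
point, which matches the intent described in the commit message; how coreboot
actually hooks this into the program-loading path is not shown in this excerpt.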
Diffstat (limited to 'src/include/cpu')
-rw-r--r--   src/include/cpu/x86/cache.h | 4 ++++
1 file changed, 4 insertions(+), 0 deletions(-)
diff --git a/src/include/cpu/x86/cache.h b/src/include/cpu/x86/cache.h
index 27b727bcb9fb..63703a7871d6 100644
--- a/src/include/cpu/x86/cache.h
+++ b/src/include/cpu/x86/cache.h
@@ -12,6 +12,8 @@
#if !defined(__ASSEMBLER__)
+#include <stdbool.h>
+
static inline void wbinvd(void)
{
asm volatile ("wbinvd" ::: "memory");
@@ -27,6 +29,8 @@ static inline void clflush(void *addr)
asm volatile ("clflush (%0)"::"r" (addr));
}
+bool clflush_supported(void);
+
/* The following functions require the __always_inline due to AMD
* function STOP_CAR_AND_CPU that disables cache as
* RAM, the cache as RAM stack can no longer be used. Called