| author    | Compostella, Jeremy <jeremy.compostella@intel.com> | 2020-10-10 04:42:34 +0800 |
|-----------|---|---|
| committer | mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> | 2020-10-16 01:12:05 +0000 |
| commit    | d25fd8710d6c8fc11582210fb1f8480c0d98416b (patch) | |
| tree      | c8da622ea074b171f13b3244125cd3d05d11c7e7 /MdePkg/Library/BaseLib/RiscV64 | |
| parent    | 19c87b7d446c3273e84b238cb02cd1c0ae69c43e (diff) | |
BaseMemoryLibSse2: Take advantage of write combining buffers
The current SSE2 implementation of the ZeroMem(), SetMem(),
SetMem16(), SetMem32(), and SetMem64() functions writes only 16 bytes
at a time. This hurts performance so badly that it is even slower
than a simple 'rep stos' (by 4%) in regular DRAM.
To take full advantage of the 'movntdq' instruction, it is better to
"queue" a total of 64 bytes in the write-combining buffers. This
patch implements such a change. Below is a table where I measured
(with 'rdtsc') the time to write an entire 100MB RAM buffer. These
functions now operate almost two times faster.
| Function | Arch | Untouched (ticks) | 64 bytes (ticks) | Speedup |
|----------+------+-------------------+------------------+---------|
| ZeroMem  | Ia32 |          17765947 |          9136062 |  1.945x |
| ZeroMem  | X64  |          17525170 |          9233391 |  1.898x |
| SetMem   | Ia32 |          17522291 |          9137272 |  1.918x |
| SetMem   | X64  |          17949261 |          9176978 |  1.956x |
| SetMem16 | Ia32 |          18219673 |          9372062 |  1.944x |
| SetMem16 | X64  |          17523331 |          9275184 |  1.889x |
| SetMem32 | Ia32 |          18495036 |          9273053 |  1.994x |
| SetMem32 | X64  |          17368864 |          9285885 |  1.870x |
| SetMem64 | Ia32 |          18564473 |          9241362 |  2.009x |
| SetMem64 | X64  |          17506951 |          9280148 |  1.886x |
Signed-off-by: Jeremy Compostella <jeremy.compostella@intel.com>
Reviewed-by: Liming Gao <gaoliming@byosoft.com.cn>