author     Eugeniy Paltsev <eugeniy.paltsev@synopsys.com>       2019-01-30 19:32:40 +0300
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>      2019-03-23 14:35:16 +0100
commit     8c8561afce39f258465de68c02d7457cee0d5836
tree       abc68752254212bbf8fb134252d8a183aac7ae86
parent     b0c4ed01329827cb9ee5a8112abcdb76fe6c1dd5
ARCv2: lib: memcpy: fix doing prefetchw outside of buffer
[ Upstream commit f8a15f97664178f27dfbf86a38f780a532cb6df0 ]

The ARCv2 optimized memcpy uses the PREFETCHW instruction to prefetch the
next cache line, but doesn't ensure that the line is not past the end of
the buffer. PREFETCHW changes the line ownership and marks it dirty, which
can cause data corruption if this area is used for DMA IO.

Fix the issue by avoiding the PREFETCHW. This leads to performance
degradation, but that is acceptable as we'll introduce a new memcpy
implementation optimized for unaligned memory access.

We also cut out all PREFETCH instructions, as they are quite useless here:
 * we issue PREFETCH right before the corresponding LOAD instruction.
 * we copy 16 or 32 bytes of data (depending on CONFIG_ARC_HAS_LL64) per
   iteration of the main loop, so we issue PREFETCH 4 times (or 2 times)
   for each L1 cache line (with the default 64B L1 cache line). Obviously
   this is not optimal.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
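For illustration, a minimal C sketch of the hazard (the actual patch edits
hand-written ARCv2 assembly; the function and macro names below are ours,
and GCC's __builtin_prefetch(addr, 1) stands in for PREFETCHW):

    #include <stddef.h>
    #include <stdint.h>

    #define L1_CACHE_LINE 64 /* default L1 line size on ARCv2 */
    #define COPY_CHUNK    32 /* per-iteration copy size with CONFIG_ARC_HAS_LL64 */

    void *memcpy_prefetchw_hazard(void *dst, const void *src, size_t n)
    {
            uint8_t *d = dst;
            const uint8_t *s = src;
            size_t i = 0;

            for (; i + COPY_CHUNK <= n; i += COPY_CHUNK) {
                    /*
                     * The hazard: on the final iterations this
                     * prefetch-for-write touches the cache line just past
                     * the end of dst. PREFETCHW takes ownership of that
                     * line and marks it dirty, so its eventual write-back
                     * can clobber adjacent memory filled by a DMA engine.
                     *
                     * It is also wasteful: with 32-byte chunks and
                     * 64-byte lines, the same cache line is prefetched
                     * twice, each time right before the accesses it was
                     * meant to hide, leaving no time for it to help.
                     */
                    __builtin_prefetch(d + i + L1_CACHE_LINE, 1);
                    for (size_t j = 0; j < COPY_CHUNK; j++)
                            d[i + j] = s[i + j];
            }
            for (; i < n; i++) /* tail bytes */
                    d[i] = s[i];
            return dst;
    }

The fix in the patch is simply to drop the prefetches; the copy loop
itself is unchanged.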