author | Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com> | 2018-07-26 15:05:37 +0300 |
---|---|---|
committer | David S. Miller <davem@davemloft.net> | 2018-07-29 12:33:30 -0700 |
commit | 9939a46d90c6c76f4533d534dbadfa7b39dc6acc (patch) | |
tree | e8f9af52182bc85cf5af89556c393280320bb151 /net/openvswitch | |
parent | 383d470936c05554219094a4d364d964cb324827 (diff) | |
NET: stmmac: align DMA stuff to largest cache line length
As of today the STMMAC_ALIGN macro (which is used to align DMA-related
data) relies on the L1 cache line length (L1_CACHE_BYTES).
This isn't correct on systems with several cache levels, where the L1
cache line can be smaller than an outer-level (e.g. L2) line. In that
case a DMA buffer can end up sharing a cache line with unrelated data,
and that data can be lost when the DMA buffer is invalidated before a
DMA transaction.
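As an illustration (not part of the patch), the stand-alone sketch below uses hypothetical line sizes of 32 bytes for L1 and 64 bytes for the largest cache level, plus a made-up ALIGN_UP helper and start address, to show how L1-only alignment can leave a buffer sharing an outer-level line with neighbouring data:

```c
/*
 * Stand-alone illustration, not kernel code: assumes a 32-byte L1 line
 * and a 64-byte largest (outer-level) line; addresses are made up.
 */
#include <stdio.h>

#define L1_CACHE_BYTES   32UL                          /* assumed L1 line size      */
#define SMP_CACHE_BYTES  64UL                          /* assumed largest line size */
#define ALIGN_UP(x, a)   (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long other_data_end = 0x1008;  /* unrelated data ends here */
	unsigned long l1_buf  = ALIGN_UP(other_data_end, L1_CACHE_BYTES);
	unsigned long smp_buf = ALIGN_UP(other_data_end, SMP_CACHE_BYTES);
	unsigned long line    = l1_buf & ~(SMP_CACHE_BYTES - 1);

	/* The L1-aligned buffer (0x1020) sits in the 64-byte line
	 * [0x1000, 0x1040) together with the unrelated data, so
	 * invalidating that line for DMA discards the unrelated data too. */
	printf("L1-aligned buffer : 0x%lx (shares line [0x%lx, 0x%lx))\n",
	       l1_buf, line, line + SMP_CACHE_BYTES);
	printf("SMP-aligned buffer: 0x%lx (starts on its own line)\n", smp_buf);
	return 0;
}
```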
Fix that by aligning to SMP_CACHE_BYTES, which reflects the largest
cache line length in the system, instead of L1_CACHE_BYTES.
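The diff body is not visible in this filtered view; the change described above amounts to switching the alignment macro, presumably in drivers/net/ethernet/stmicro/stmmac/stmmac_main.c. Treat the before/after below as an approximate sketch rather than the exact patch text:

```c
/* Approximate sketch of the change (file and exact form assumed). */

/* Before: aligned only to the L1 cache line length.
 * #define STMMAC_ALIGN(x)	L1_CACHE_ALIGN(x)
 */

/* After: aligned to the largest cache line length in the system. */
#define STMMAC_ALIGN(x)	__ALIGN_KERNEL(x, SMP_CACHE_BYTES)
```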
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/openvswitch'; this commit touches no files under that path, so the filtered diffstat is empty)
0 files changed, 0 insertions, 0 deletions