author	Gerhard Engleder <gerhard@engleder-embedded.com>	2023-10-11 20:21:54 +0200
committer	David S. Miller <davem@davemloft.net>	2023-10-16 09:59:34 +0100
commit	dccce1d7c04051bc25d3abbe7716d0ae7af9c28a (patch)
tree	45c8dd119d22e9cdc01e81e2f810b72e17ddba56 /Documentation/power
parent	d8118b945f03dbfcda72c273fa9b0548f73c8ce9 (diff)
tsnep: Inline small fragments within TX descriptor
The tsnep network controller is able to extend the descriptor directly
with data to be transmitted. In this case no TX data DMA address is
necessary. Instead of the TX data DMA address the TX data buffer is
placed at the end of the descriptor.
The descriptor is read with a single 64-byte DMA read by the tsnep network
controller. If the sum of descriptor data and TX data is less than or
equal to 64 bytes, then no additional DMA read is necessary to read the
TX data. Therefore, it makes sense to inline small fragments up to this
limit within the descriptor ring.
Inlined fragments need to be copied to the descriptor ring. On the other
hand DMA mapping is not necessary. At most 40 bytes are copied, so
copying should be faster than DMA mapping.
On a 1.2 GHz Cortex-A53, copying takes <100 ns and DMA mapping takes
>200 ns, so inlining small fragments should result in lower CPU load.
The performance improvement is small; accordingly, a comparison of CPU
load with and without inlining of small fragments did not show any
significant difference.
With this optimization fewer DMA reads are done, which decreases the
load on the interconnect.
Signed-off-by: Gerhard Engleder <gerhard@engleder-embedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>