author | Nick Child <nnac123@linux.ibm.com> | 2024-08-01 16:12:15 -0500
---|---|---
committer | Jakub Kicinski <kuba@kernel.org> | 2024-08-02 16:39:44 -0700
commit | b5381a5540cbb7c18642a3280cb3906160bd6546 (patch) |
tree | 3a9e56e06a6e1176168b989c85e8c43f2498b2e0 /net/tipc |
parent | f128c7cf0530cd104d1370648c29eff0b582700f (diff) |
ibmveth: Recycle buffers during replenish phase
When the length of a packet is under the rx_copybreak threshold, the
buffer is copied into a new skb and sent up the stack. This allows the
DMA-mapped memory to be recycled back to firmware.
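For illustration, a minimal sketch of that copybreak decision, assuming a hypothetical helper name (rx_copybreak_copy()) and an illustrative threshold value; only the general shape, copying small frames into a fresh skb so the DMA buffer can be returned to firmware, is taken from the commit message:

```c
/*
 * Hedged sketch only: rx_copybreak_copy() and the threshold value are
 * hypothetical, not the driver's actual code.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static const unsigned int rx_copybreak = 128;	/* illustrative threshold */

/* Copy a small frame into a fresh skb so the DMA buffer can be recycled. */
static struct sk_buff *rx_copybreak_copy(struct net_device *netdev,
					 const void *buf, unsigned int len)
{
	struct sk_buff *skb;

	if (len >= rx_copybreak)
		return NULL;	/* large frame: hand the mapped buffer up instead */

	skb = netdev_alloc_skb(netdev, len);
	if (!skb)
		return NULL;

	skb_copy_to_linear_data(skb, buf, len);
	skb_put(skb, len);
	return skb;		/* original DMA buffer is now free to recycle */
}
```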
Previously, the reuse of the DMA space was handled immediately, which
meant that further packet processing had to wait until h_add_logical_lan
finished for that packet.
Therefore, when reusing a buffer, offload the hcall to the replenish
function. As a result, much of the shared logic between the recycle and
replenish functions can be removed.
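A rough before/after sketch of that control-flow change follows; the pool structure, post_buffer_to_fw(), and the bitmap bookkeeping are hypothetical stand-ins for the driver's real free map and its wrapper around the h_add_logical_lan hcall.

```c
/*
 * Hypothetical model only: it shows where the hcall moves, not the
 * driver's actual data structures.
 */
#include <linux/bitops.h>
#include <linux/types.h>

struct rx_pool {
	unsigned long *free_map;	/* buffers waiting to be reposted to FW */
	unsigned int size;
};

/* Stand-in for issuing h_add_logical_lan for one buffer. */
static void post_buffer_to_fw(unsigned int index)
{
}

/* OLD: recycle reposted the buffer immediately, so the poll loop stalled
 * on the hcall before it could process the next packet. */
static void recycle_buffer_old(struct rx_pool *pool, unsigned int index)
{
	post_buffer_to_fw(index);
}

/* NEW: recycle only marks the buffer free... */
static void recycle_buffer_new(struct rx_pool *pool, unsigned int index)
{
	set_bit(index, pool->free_map);
}

/* ...and the replenish phase issues the hcalls outside the hot path. */
static void replenish_pool(struct rx_pool *pool)
{
	unsigned int i;

	for_each_set_bit(i, pool->free_map, pool->size) {
		post_buffer_to_fw(i);
		clear_bit(i, pool->free_map);
	}
}
```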
This change increases TCP_RR packet rate by another 15% (370k to 430k
txns). The ftrace data supports this:
PREV: ibmveth_poll = 8078553.0 us / 190999.0 hits = AVG 42.3 us
NEW: ibmveth_poll = 7632787.0 us / 224060.0 hits = AVG 34.07 us
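As a quick sanity check of those numbers (plain userspace C, values copied from the message above):

```c
#include <stdio.h>

int main(void)
{
	double prev_us = 8078553.0, prev_hits = 190999.0;
	double new_us  = 7632787.0, new_hits  = 224060.0;

	printf("prev avg: %.2f us\n", prev_us / prev_hits);	/* ~42.30 us */
	printf("new  avg: %.2f us\n", new_us / new_hits);	/* ~34.07 us */
	/* 370k -> 430k txns is ~16%, in line with the "another 15%" claim. */
	printf("TCP_RR gain: %.1f%%\n", (430e3 - 370e3) / 370e3 * 100.0);
	return 0;
}
```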
Signed-off-by: Nick Child <nnac123@linux.ibm.com>
Reviewed-by: Shannon Nelson <shannon.nelson@amd.com>
Link: https://patch.msgid.link/20240801211215.128101-3-nnac123@linux.ibm.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>