author		Eric Dumazet <edumazet@google.com>	2012-08-07 02:19:56 +0000
committer	David S. Miller <davem@davemloft.net>	2012-08-07 16:24:55 -0700
commit		425f09ab7d1c9da6ca4137dd639cb6fe3f8a88f3 (patch)
tree		49c586d8025f671a8e47415f4364bfb818534569 /include/net/neighbour.h
parent		e07b94f1352723994d8b588ac5ed8af91bcc9fb6 (diff)
net: output path optimizations
1) Avoid dirtying the neighbour's confirmed field.
   TCP workloads hit this cache line for each incoming ACK.
   Write n->confirmed only when jiffies has actually changed
   (a minimal sketch of the pattern follows the list).
2) Optimize neigh_hh_output() for the common Ethernet case, where
   hh_len is less than 16 bytes. Replace the memcpy() call
   with two inlined 64-bit loads/stores on x86_64.
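
For item 1, a minimal sketch of the write-avoidance pattern (the hunk that
actually implements it lands outside include/net/neighbour.h and is not part
of the diffstat below; the helper name here is purely illustrative):

	/* Illustrative only: confirm a neighbour without dirtying its cache
	 * line on every packet.  n->confirmed is written at most once per
	 * jiffy, so a flood of ACKs hitting the same neighbour stays
	 * read-mostly on that cache line.
	 */
	static inline void neigh_confirm_sketch(struct neighbour *n)
	{
		unsigned long now = jiffies;

		/* avoid dirtying neighbour */
		if (n->confirmed != now)
			n->confirmed = now;
	}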
Bench results using the udpflood test with the -C option (MSG_CONFIRM flag
added to sendto() to reproduce the n->confirmed dirtying on UDP; a sketch of
the per-call pattern follows the numbers).
24 threads, each doing 1,000,000 UDP sendto() calls on a dummy device, 4 runs.
before : 2.247s, 2.235s, 2.247s, 2.318s
after : 1.884s, 1.905s, 1.891s, 1.895s
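
For reference, the per-call pattern the -C option exercises is roughly the
following (a sketch, not the udpflood source; the payload size and helper
name are arbitrary):

	/* Each send carries MSG_CONFIRM, so the kernel confirms the
	 * destination's neighbour entry on every sendto() -- the path that
	 * dirtied n->confirmed on each call before this patch.
	 */
	#include <sys/socket.h>

	static ssize_t udp_send_confirm(int fd, const struct sockaddr *dst,
					socklen_t dst_len)
	{
		static const char payload[32];

		return sendto(fd, payload, sizeof(payload), MSG_CONFIRM,
			      dst, dst_len);
	}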
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include/net/neighbour.h')
-rw-r--r--	include/net/neighbour.h	14
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/include/net/neighbour.h b/include/net/neighbour.h
index 344d8988842a..0dab173e27da 100644
--- a/include/net/neighbour.h
+++ b/include/net/neighbour.h
@@ -334,18 +334,22 @@ static inline int neigh_hh_bridge(struct hh_cache *hh, struct sk_buff *skb)
 }
 #endif
 
-static inline int neigh_hh_output(struct hh_cache *hh, struct sk_buff *skb)
+static inline int neigh_hh_output(const struct hh_cache *hh, struct sk_buff *skb)
 {
 	unsigned int seq;
 	int hh_len;
 
 	do {
-		int hh_alen;
-
 		seq = read_seqbegin(&hh->hh_lock);
 		hh_len = hh->hh_len;
-		hh_alen = HH_DATA_ALIGN(hh_len);
-		memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
+		if (likely(hh_len <= HH_DATA_MOD)) {
+			/* this is inlined by gcc */
+			memcpy(skb->data - HH_DATA_MOD, hh->hh_data, HH_DATA_MOD);
+		} else {
+			int hh_alen = HH_DATA_ALIGN(hh_len);
+
+			memcpy(skb->data - hh_alen, hh->hh_data, hh_alen);
+		}
 	} while (read_seqretry(&hh->hh_lock, seq));
 
 	skb_push(skb, hh_len);