author     Toke Høiland-Jørgensen <toke@toke.dk>          2018-02-02 16:11:05 +0100
committer  Johannes Berg <johannes.berg@intel.com>        2018-02-23 12:08:56 +0100
commit     36148c2bbfbe50c50206b6f61d072203c80161e0 (patch)
tree       aa85fdf870c82db17fee593d0d0efbe4671f71dd /net/mac80211
parent     93c62c45ed5fad1b87e3a45835b251cd68de9c46 (diff)
mac80211: Adjust TSQ pacing shift
Since we now have the convenient helper to do so, actually adjust the TSQ
pacing shift for packets going out over a WiFi interface. This significantly
improves throughput for locally-originated TCP connections. The default
pacing shift of 10 corresponds to ~1ms of queued packet data. Adjusting this
to a shift of 8 (i.e. ~4ms) improves 1-hop throughput for ath9k by a factor
of 3, whereas increasing it further has diminishing returns.

Achieved throughput for different values of sk_pacing_shift (average of 5
iterations of 10-sec netperf runs to a host on the other side of the WiFi
hop):

sk_pacing_shift 10:  43.21 Mbps (pre-patch)
sk_pacing_shift  9:  78.17 Mbps
sk_pacing_shift  8: 123.94 Mbps
sk_pacing_shift  7: 128.31 Mbps

Latency for competing flows increases from ~3 ms to ~10 ms with this change.
This is about the same magnitude of queueing latency induced by flows that
are not originated on the WiFi device itself (and so are not limited by TSQ).

Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
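As an aside on the arithmetic above: sk_pacing_shift expresses the TSQ budget
as a power-of-two fraction of one second at the current pacing rate, so a
shift of 10 allows about 1/1024 s (~1 ms) of data queued below the socket and
a shift of 8 about 1/256 s (~4 ms). A minimal stand-alone C illustration of
this arithmetic (illustrative userspace code, not from the kernel tree; the
100 Mbit/s pacing rate is an arbitrary example value):

#include <stdio.h>

int main(void)
{
        /* Arbitrary example pacing rate: 100 Mbit/s in bytes per second. */
        unsigned long pacing_rate = 100 * 1000 * 1000 / 8;
        int shifts[] = { 10, 9, 8, 7 };

        for (int i = 0; i < 4; i++) {
                /* TSQ allows roughly (pacing rate >> shift) bytes queued,
                 * i.e. about 1/2^shift seconds of data at that rate.
                 */
                unsigned long bytes = pacing_rate >> shifts[i];
                double ms = 1000.0 / (1UL << shifts[i]);

                printf("sk_pacing_shift %2d: ~%.2f ms (%lu bytes at 100 Mbit/s)\n",
                       shifts[i], ms, bytes);
        }
        return 0;
}

At 100 Mbit/s this works out to roughly 12 KB of allowed queue for shift 10
versus roughly 48 KB for shift 8, which is the extra buffering the aggregation
logic needs, as the in-code comment in the diff below explains.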
Diffstat (limited to 'net/mac80211')
-rw-r--r--   net/mac80211/tx.c   8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index 25904af38839..69722504e3e1 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -3574,6 +3574,14 @@ void __ieee80211_subif_start_xmit(struct sk_buff *skb,
 	if (!IS_ERR_OR_NULL(sta)) {
 		struct ieee80211_fast_tx *fast_tx;
 
+		/* We need a bit of data queued to build aggregates properly, so
+		 * instruct the TCP stack to allow more than a single ms of data
+		 * to be queued in the stack. The value is a bit-shift of 1
+		 * second, so 8 is ~4ms of queued data. Only affects local TCP
+		 * sockets.
+		 */
+		sk_pacing_shift_update(skb->sk, 8);
+
 		fast_tx = rcu_dereference(sta->fast_tx);
 		if (fast_tx &&
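The helper the commit message refers to, sk_pacing_shift_update(), is defined
in include/net/sock.h. A rough sketch of what such a helper looks like (a
paraphrase assuming the struct sock and sk_fullsock() definitions from that
header, not the verbatim kernel source):

static inline void sk_pacing_shift_update(struct sock *sk, int val)
{
	/* Ignore sockets that are NULL or not full sockets (e.g. request
	 * or timewait sockets), and skip the write when the shift already
	 * has the requested value.
	 */
	if (!sk || !sk_fullsock(sk) || sk->sk_pacing_shift == val)
		return;
	sk->sk_pacing_shift = val;
}

Guarding on the current value presumably avoids dirtying a hot socket cache
line on every packet, which matters here since the call sits in the xmit
fast path.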