author		Jakub Kicinski <kuba@kernel.org>	2023-10-25 12:23:36 -0700
committer	Jakub Kicinski <kuba@kernel.org>	2023-10-25 12:23:37 -0700
commit		8846f9a04b10b7f61214425409838d764df7080d (patch)
tree		cc10ea2e27fb56390fbe54cc50afd95a8f64cfb4 /include/net
parent		aad36cd32982d59470de6365f97f154c5af5d1d2 (diff)
parent		8005184fd1ca6aeb3fea36f4eb9463fc1b90c114 (diff)
Merge branch 'mptcp-features-and-fixes-for-v6-7'
Mat Martineau says:

====================
mptcp: Features and fixes for v6.7

Patch 1 adds a configurable timeout for the MPTCP connection when all
subflows are closed, to support break-before-make use cases.

Patch 2 is a fix for a 1-byte error in rx data counters with MPTCP
fastopen connections.

Patch 3 is a minor code cleanup.

Patches 4 & 5 add handling of rcvlowat for MPTCP sockets, with a
prerequisite patch to use a common scaling ratio between TCP and MPTCP.

Patch 6 improves efficiency of memory copying in MPTCP transmit code.

Patch 7 refactors syncing of socket options from the MPTCP socket to
its subflows.

Patches 8 & 9 help the MPTCP packet scheduler perform well by changing
the handling of notsent_lowat in subflows and how available buffer
space is calculated for MPTCP-level sends.
====================

Link: https://lore.kernel.org/r/20231023-send-net-next-20231023-2-v1-0-9dc60939d371@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
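The rcvlowat handling in patches 4 & 5 is exercised through the standard SO_RCVLOWAT socket option on an MPTCP socket. Below is a minimal userspace sketch, not part of this series: the IPPROTO_MPTCP fallback define and the 16 KiB threshold are illustrative assumptions.

/* Hedged sketch: set a receive low-water mark on an MPTCP socket so
 * recv() wakeups wait for a minimum amount of queued data, which is
 * the behaviour patches 4 & 5 teach MPTCP to take into account.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262	/* value from include/uapi/linux/in.h */
#endif

int main(void)
{
	int lowat = 16 * 1024;	/* arbitrary example threshold */
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

	if (fd < 0) {
		perror("socket(IPPROTO_MPTCP)");
		return 1;
	}

	if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)) < 0)
		perror("setsockopt(SO_RCVLOWAT)");

	close(fd);
	return 0;
}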
Diffstat (limited to 'include/net')
-rw-r--r--	include/net/tcp.h	12
1 file changed, 7 insertions, 5 deletions
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 39b731c900dd..993b7fcd4e46 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1489,13 +1489,15 @@ static inline int tcp_space_from_win(const struct sock *sk, int win)
return __tcp_space_from_win(tcp_sk(sk)->scaling_ratio, win);
}

+/* Assume a conservative default of 1200 bytes of payload per 4K page.
+ * This may be adjusted later in tcp_measure_rcv_mss().
+ */
+#define TCP_DEFAULT_SCALING_RATIO ((1200 << TCP_RMEM_TO_WIN_SCALE) / \
+ SKB_TRUESIZE(4096))
+
static inline void tcp_scaling_ratio_init(struct sock *sk)
{
- /* Assume a conservative default of 1200 bytes of payload per 4K page.
- * This may be adjusted later in tcp_measure_rcv_mss().
- */
- tcp_sk(sk)->scaling_ratio = (1200 << TCP_RMEM_TO_WIN_SCALE) /
- SKB_TRUESIZE(4096);
+ tcp_sk(sk)->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
}

/* Note: caller must be prepared to deal with negative returns */
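For a rough sense of what TCP_DEFAULT_SCALING_RATIO works out to, the arithmetic can be replayed in userspace. This is a sketch only: TCP_RMEM_TO_WIN_SCALE is assumed to be 8 as in current tcp.h, and the sk_buff/skb_shared_info overhead folded into SKB_TRUESIZE(4096) is approximated, since the exact figure depends on the build.

/* Hedged arithmetic check of the default scaling ratio above.
 * SKB_TRUESIZE_4K_APPROX is a made-up stand-in for SKB_TRUESIZE(4096);
 * the 576 bytes of metadata overhead is an assumption, not a kernel value.
 */
#include <stdio.h>

#define TCP_RMEM_TO_WIN_SCALE	8		/* assumed, per include/net/tcp.h */
#define SKB_TRUESIZE_4K_APPROX	(4096 + 576)	/* payload + assumed overhead */

int main(void)
{
	unsigned int ratio = (1200 << TCP_RMEM_TO_WIN_SCALE) /
			     SKB_TRUESIZE_4K_APPROX;

	/* ratio/256 is the fraction of receive-buffer truesize that is
	 * advertised as window; with these numbers it lands near 65,
	 * i.e. roughly a quarter of the buffer.
	 */
	printf("default scaling_ratio ~= %u (~%u%% of truesize)\n",
	       ratio, (ratio * 100) >> TCP_RMEM_TO_WIN_SCALE);
	return 0;
}

Exporting the default as a named macro lets MPTCP reuse the same conservative starting ratio as TCP instead of duplicating the constant, which is the prerequisite the cover letter mentions for the rcvlowat work.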