path: root/net/ipv4/tcp_input.c
Commit message    Author    Age    Files    Lines
* [NET]: Transform skb_queue_len() binary tests into skb_queue_empty() (David S. Miller, 2005-07-08, 1 file, -6/+5)
This is part of the grand scheme to eliminate the qlen member of skb_queue_head, and subsequently remove the 'list' member of sk_buff.

Most users of skb_queue_len() want to know if the queue is empty or not, and that's trivially done with skb_queue_empty(), which doesn't use the skb_queue_head->qlen member and instead uses the emptiness of the queue list as the test.

Signed-off-by: David S. Miller <davem@davemloft.net>
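The idea generalizes beyond sk_buff: a circular doubly-linked list is empty exactly when its head points back to itself, so no counter is needed for the test. A minimal standalone C sketch with simplified types (not the kernel's sk_buff_head):

#include <stdbool.h>
#include <stdio.h>

struct queue_head {
    struct queue_head *next;
    struct queue_head *prev;
    unsigned int qlen;              /* the member the series wants to drop */
};

/* Analogue of skb_queue_empty(): no dependence on qlen. */
static bool queue_empty(const struct queue_head *list)
{
    return list->next == list;
}

int main(void)
{
    struct queue_head q = { &q, &q, 0 };

    /* Before: if (skb_queue_len(&q) == 0) ...
     * After:  if (skb_queue_empty(&q)) ...  */
    printf("empty: %d\n", queue_empty(&q));
    return 0;
}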
* [TCP]: Move to new TSO segmenting scheme. (David S. Miller, 2005-07-05, 1 file, -5/+5)
Make TSO segment transmit size decisions at send time, not earlier.

The basic scheme is that we try to build as large a TSO frame as possible when pulling in the user data, but the size of the TSO frame output to the card is determined at transmit time.

This is guided by tp->xmit_size_goal. It is always set to a multiple of MSS and tells sendmsg/sendpage how large an SKB to try and build.

Later, tcp_write_xmit() and tcp_push_one() chop up the packet if necessary and conditions warrant. These routines can also decide to "defer" in order to wait for more ACKs to arrive and thus allow larger TSO frames to be emitted.

A general observation is that TSO elongates the pipe, thus requiring a larger congestion window and larger buffering, especially at the sender side. Therefore, it is important that applications 1) get a large enough socket send buffer (this is accomplished by our dynamic send buffer expansion code) and 2) do large enough writes.

Signed-off-by: David S. Miller <davem@davemloft.net>
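As a rough illustration of the "multiple of MSS" sizing described above, here is a hedged standalone sketch; size_goal_sketch() and its constants are illustrative stand-ins, not the kernel's actual xmit_size_goal computation:

#include <stdio.h>

/* Hypothetical helper: round a large send goal down to a whole number
 * of MSS-sized segments, never going below one segment. */
static unsigned int size_goal_sketch(unsigned int mss_now,
                                     unsigned int frame_ceiling)
{
    unsigned int goal = frame_ceiling;

    if (goal > mss_now)
        goal -= goal % mss_now;   /* a multiple of MSS, as above */
    else
        goal = mss_now;

    return goal;
}

int main(void)
{
    /* e.g. a 1448-byte MSS against a 64 KB TSO frame ceiling */
    printf("size goal: %u bytes\n", size_goal_sketch(1448, 65536));
    return 0;
}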
* [TCP]: Break out send buffer expansion test. (David S. Miller, 2005-07-05, 1 file, -4/+23)
This makes it easier to understand, and allows easier tweaking of the heuristic later on.

Signed-off-by: David S. Miller <davem@davemloft.net>
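A sketch of what "breaking out" such a test can look like: one predicate gathering the reasons not to grow the buffer, so the heuristic can be tuned in one place. The struct and conditions below are simplified assumptions modeled loosely on this era's tcp_should_expand_sndbuf(), not the exact kernel code:

#include <stdbool.h>
#include <stdio.h>

struct sock_sketch {
    bool sndbuf_locked;       /* user pinned the buffer size via setsockopt */
    bool memory_pressure;     /* global TCP memory pressure flag */
    unsigned int packets_out; /* packets currently in flight */
    unsigned int snd_cwnd;    /* congestion window, in packets */
};

/* One predicate, one place to tweak the heuristic later. */
static bool should_expand_sndbuf(const struct sock_sketch *sk)
{
    if (sk->sndbuf_locked)                /* respect an explicit user setting */
        return false;
    if (sk->memory_pressure)              /* system is short on memory */
        return false;
    if (sk->packets_out >= sk->snd_cwnd)  /* cwnd-limited; growing won't help */
        return false;
    return true;
}

int main(void)
{
    struct sock_sketch sk = { false, false, 4, 10 };
    printf("expand: %d\n", should_expand_sndbuf(&sk));
    return 0;
}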
* [TCP]: Do not call tcp_tso_acked() if no work to do. (David S. Miller, 2005-07-05, 1 file, -1/+2)
In tcp_clean_rtx_queue(), if the TSO packet is not even partially acked, do not waste time calling tcp_tso_acked().

Signed-off-by: David S. Miller <davem@davemloft.net>
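The early-out hinges on a wraparound-safe sequence comparison: if snd_una has not advanced past the packet's starting sequence number, no part of it was ACKed. A simplified, self-contained sketch; the names are stand-ins for the kernel's:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Wraparound-safe "a is later than b" for 32-bit sequence numbers,
 * in the spirit of the kernel's after() macro. */
static bool seq_after(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) > 0;
}

struct tso_skb_sketch {
    uint32_t seq;   /* first sequence number covered by this packet */
    int pcount;     /* TSO sub-segments carried */
};

/* Skip the partial-ACK bookkeeping when there is nothing to do. */
static bool worth_tso_acked(const struct tso_skb_sketch *skb, uint32_t snd_una)
{
    return skb->pcount > 1 && seq_after(snd_una, skb->seq);
}

int main(void)
{
    struct tso_skb_sketch skb = { .seq = 1000, .pcount = 4 };
    printf("call it: %d\n", worth_tso_acked(&skb, 1000)); /* 0: no bytes acked */
    return 0;
}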
* [TCP]: Kill bogus comment above tcp_tso_acked(). (David S. Miller, 2005-07-05, 1 file, -9/+0)
Everything stated there is out of date. tcp_trim_skb() does adjust the available socket send buffer space and skb->truesize now.

Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Fix __tcp_push_pending_frames() 'nonagle' handling. (David S. Miller, 2005-07-05, 1 file, -10/+7)
'nonagle' should be passed to the tcp_snd_test() function as 'TCP_NAGLE_PUSH' if we are checking an SKB not at the tail of the write_queue. This is because Nagle does not apply to such frames since we cannot possibly tack more data onto them.

However, while doing this __tcp_push_pending_frames() makes all of the packets in the write_queue use this modified 'nonagle' value.

Fix the bug and simplify this function by just calling tcp_write_xmit() directly if sk_send_head is non-NULL.

As a result, we can now make tcp_data_snd_check() just call tcp_push_pending_frames() instead of the specialized __tcp_data_snd_check().

Signed-off-by: David S. Miller <davem@davemloft.net>
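The core observation, sketched below with simplified stand-in names: any packet that already has a successor in the write queue can never have more data tacked on, so Nagle's rationale evaporates and the packet should be evaluated as if pushed:

#include <stdbool.h>
#include <stdio.h>

enum { NAGLE_OFF, NAGLE_ON, NAGLE_PUSH }; /* stand-ins for the TCP_NAGLE_* flags */

/* A packet with data queued behind it can never grow, so evaluate it
 * with PUSH semantics no matter what the socket's setting is. */
static int effective_nonagle(bool skb_is_queue_tail, int nonagle)
{
    return skb_is_queue_tail ? nonagle : NAGLE_PUSH;
}

int main(void)
{
    printf("non-tail skb: %d\n", effective_nonagle(false, NAGLE_ON)); /* 2 = PUSH */
    return 0;
}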
* [TCP]: Move __tcp_data_snd_check into tcp_output.c (David S. Miller, 2005-07-05, 1 file, -10/+0)
It reimplements portions of tcp_snd_check(), so if we move it to tcp_output.c we can consolidate its logic much more easily in a later change.

Signed-off-by: David S. Miller <davem@davemloft.net>
* [TCP]: Add pluggable congestion control algorithm infrastructure. (Stephen Hemminger, 2005-06-23, 1 file, -684/+53)
Allow TCP to have multiple pluggable congestion control algorithms. Algorithms are defined by a set of operations and can be built in or provided as modules. The legacy "new RENO" algorithm is used as a starting point and fallback.

Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
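A hedged sketch of the "set of operations" idea: each algorithm fills in a vtable of hooks and a Reno-like instance plugs into it. Field and function names below are loosely modeled on the kernel's struct tcp_congestion_ops but simplified; this is not the kernel interface itself:

#include <stdint.h>
#include <stdio.h>

struct ca_state {
    uint32_t snd_cwnd;      /* congestion window (packets) */
    uint32_t snd_ssthresh;  /* slow-start threshold */
};

struct cong_ops {
    const char *name;
    uint32_t (*ssthresh)(struct ca_state *ca);          /* called on loss */
    void (*cong_avoid)(struct ca_state *ca, int acked); /* called on ACK */
};

/* A minimal Reno-like instance: halve on loss, AIMD growth otherwise. */
static uint32_t reno_ssthresh(struct ca_state *ca)
{
    uint32_t half = ca->snd_cwnd / 2;
    return half > 2 ? half : 2;
}

static void reno_cong_avoid(struct ca_state *ca, int acked)
{
    if (ca->snd_cwnd < ca->snd_ssthresh)
        ca->snd_cwnd += acked;  /* slow start: grow per ACKed segment */
    else
        ca->snd_cwnd += 1;      /* congestion avoidance: roughly +1 per RTT */
}

static const struct cong_ops reno_ops = {
    .name       = "reno-sketch",
    .ssthresh   = reno_ssthresh,
    .cong_avoid = reno_cong_avoid,
};

int main(void)
{
    struct ca_state ca = { .snd_cwnd = 10, .snd_ssthresh = 8 };

    reno_ops.cong_avoid(&ca, 1);               /* an ACK arrives */
    ca.snd_ssthresh = reno_ops.ssthresh(&ca);  /* then a loss is detected */
    printf("%s: cwnd=%u ssthresh=%u\n", reno_ops.name, ca.snd_cwnd, ca.snd_ssthresh);
    return 0;
}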
* [TCP]: Fix stretch ACK performance killer when doing ucopy. (David S. Miller, 2005-05-23, 1 file, -10/+1)
When we are doing ucopy, we try to defer the ACK generation to cleanup_rbuf(). This works very well most of the time, but if the ucopy prequeue is large, this ACKing behavior kills performance.

With TSO, it is possible to fill the prequeue so large that by the time the ACK is sent and gets back to the sender, most of the window has emptied of data and performance suffers significantly.

This behavior does help in some cases, so we should think about re-enabling this trick in the future, using some kind of limit in order to avoid the bug case.

Signed-off-by: David S. Miller <davem@davemloft.net>
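One way to picture the suggested "some kind of limit": keep deferring ACKs only while the deferred byte count stays under a cap. Everything here (the cap, the helper name) is a hypothetical illustration, not kernel code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical cap: defer the ACK only while the unacknowledged,
 * user-copied byte count stays small relative to the receive MSS. */
static bool should_defer_ack(uint32_t bytes_prequeued, uint32_t rcv_mss)
{
    const uint32_t defer_cap = 2 * rcv_mss;  /* illustrative limit */
    return bytes_prequeued < defer_cap;
}

int main(void)
{
    printf("defer small: %d\n", should_defer_ack(1000, 1448));  /* 1 */
    printf("defer large: %d\n", should_defer_ack(90000, 1448)); /* 0 */
    return 0;
}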
* [PATCH] update Ross Biro bouncing email address (Jesper Juhl, 2005-05-05, 1 file, -1/+1)
Ross moved. Remove the bad email address so people will find the correct one in ./CREDITS.

Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [TCP]: Trivial tcp_data_queue() cleanup (James Morris, 2005-04-25, 1 file, -1/+0)
This patch removes a superfluous initialization from tcp_data_queue().

Signed-off-by: James Morris <jmorris@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* Linux-2.6.12-rc2 (tag: v2.6.12-rc2) (Linus Torvalds, 2005-04-16, 1 file, -0/+4959)
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

Let it rip!