13 comments
Hi Kary,
Thanks for the video!
How did you add the tcp.analysis.segments_acked == xx filter and the “Buffer bytes” column?
I don’t have them in my Wireshark. Is it some custom programming?
BR,
Vladimir
Great Video! Thanks for your work.
Will you commit your changes to Wireshark?
I would really appreciate this!
If not, can you provide the diff for your changes?
Thanks!
Hey Kary,
About the delayed ACK timer…
You said the receiving TCP starts the timer on segment arrival, and kills the timer when a non-timer event (2nd segment) rolls in.
Do you know if it really works this way? I’m under the impression that it’s common for stacks to run a periodic timer (like a metronome), rather than one-shot timers for these sorts of events. The upshot of that distinction is that rather than delaying ACKs *exactly* 200ms, ACKs get delayed *up to* 200ms.
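If it helps to make the distinction concrete, here’s a rough sketch (the 200 ms period and the arrival times are made up, not taken from any particular stack) contrasting a one-shot delayed-ACK timer with a periodic “heartbeat” timer:

```python
import math

TICK = 0.200  # delayed-ACK timer period in seconds (assumed, not measured)

def one_shot_delay(arrival):
    """One-shot timer: the ACK fires exactly TICK after the segment arrives."""
    return TICK

def heartbeat_delay(arrival):
    """Periodic timer: the ACK fires at the next tick of a free-running
    TICK-period clock, so the delay is anywhere from ~0 up to TICK."""
    next_tick = math.ceil(arrival / TICK) * TICK
    return next_tick - arrival

for t in (0.010, 0.150, 0.199):
    print(f"arrival {t:.3f}s: one-shot delay {one_shot_delay(t):.3f}s, "
          f"heartbeat delay {heartbeat_delay(t):.3f}s")
```

With a heartbeat timer, the gap between the last segment and its ACK varies per segment instead of sitting at a constant 200 ms, which is exactly the *up to* vs. *exactly* distinction.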
Thanks!
Ha!
I’m sorry if it came across as nitpicking. I was really wondering if you knew something about the specific stack you were looking at in the example.
I mean… It’s possible that stacks (especially offload engines) do something new since Stevens, right? :)
Also, this got me wondering… It *feels* like TCP implementations should have per-socket delayed ACK timers, doesn’t it? Flooding the link with lots of delayed ACKs *all at the same instant* because the server’s got lots of sockets open seems to defeat the purpose.
Dammit Kary, every time you post a video, I wind up in the lab. Why is that?
The stack was Windows 2008. This was six months ago, so if I came across a source that explained Windows timers, I’ve forgotten it by now. Stevens was written about BSD, but it’s good enough most of the time :)
Didn’t we (drunkenly) argue over TCP timers or some other TCP detail at Networking Field Day? Haha
There’s a lot more that I don’t know versus I do know, so my only real job here is to inspire. So go get in that lab!
My only complaint is you keep changing your email address and making me approve your comments. I approve of you, Chris, I approve.
Hey Kary,
I think the answer to “why does this only happen when using 64KB send buffer?” is here:
https://support.microsoft.com/en-us/kb/823764
It’s clear (ish) from the PSH flag that the application is performing 64KB writes, and the send window was 64KB, so it correlates nicely.
Didn’t this non-windowed buffer behavior inside the TCP from Redmond appear in the comments on one of your previous posts?
Also, I have a theory about the occasional single-segment, non-delayed ACK: Do lots of them show up at 200ms intervals? If so, they could be the result of the periodic delayed ACK timer firing between the arrival of two segments which would otherwise have produced an ACK for two segments.
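One rough way to eyeball that theory (the timestamps below are invented; in practice you’d export the ACK arrival times from the pcap, e.g. with tshark): if the lone ACKs come from a periodic 200 ms timer, they should all sit near a common 200 ms grid:

```python
TICK = 0.200  # assumed delayed-ACK timer period in seconds

def grid_offsets(timestamps, tick=TICK):
    """Distance of each timestamp from the nearest multiple of `tick`."""
    return [min(t % tick, tick - t % tick) for t in timestamps]

# Hypothetical ACK arrival times (seconds) that cluster on a 200 ms grid:
acks = [0.401, 0.799, 1.202, 1.598]
print(all(off < 0.010 for off in grid_offsets(acks)))  # True if they share the grid
```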
Nice find with the KB. Simon and I found it as well, and we thought it was the likeliest explanation we could find, though I could repro the issue on Tomcat with blocking IO, so it doesn’t completely add up.
I went back and read through my email on this and found this:
I’ve reproduced the issue with two Win2k8R2 AWS instances. It’s definitely a combination of delayed ACK and crappy send behavior in the application. The inconsistent delayed ACK issue seems to be triggered when the sending side uses 64k buffers. If I alter the Tomcat config to use non-blocking IO and larger buffers (http://javaagile.blogspot.com/2010/08/tomcat-tuning.html – though I increased the send buffer to 1MB in the config), I saw zero delayed ACKs and much better throughput.
I had the same thought about the non-delayed ACKs today. I’ll take a look through the pcap. It’s linked under the video if you wanna take a look too.
Hi Kary, loved the video and the explanation!
Is there any way you can show us the before-and-after performance of this? (From the real-world case.)
Thank you
This is great. I love the site and am surprised I hadn’t seen this video; it’s really relevant since I’m troubleshooting a very similar issue right now with Tomcat 6.
Someone on the Wireshark Q&A thread linked it for me:
https://ask.wireshark.org/questions/56957/http-server-limiting-transfer-rate
In my case there is 220 ms of latency (long-distance WAN), and the server appears to be using a 9000-byte buffer, so it is way slower! The problem presented itself after we removed WAN acceleration.
I analyzed the capture and told the server team that the server was causing the delay: it sends only 9 KB, then WAITS for ACKs. I think delayed ACK may be a secondary factor, since the transfer sends an odd number of packets (7), then pauses:
1) 1460B
2) 1460B
3) 1460B
4) 1460B
5) 1460B
6) 1460B
7) 240B
Total 9000B
WAIT for ACK…
Repeat.
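For what it’s worth, a back-of-the-envelope check of those numbers (a rough estimate that ignores slow start, delayed ACK, and header overhead) shows why the transfer crawls: one 9000-byte buffer per 220 ms round trip caps throughput around 40 KiB/s:

```python
buffer_bytes = 9000   # bytes sent per round trip (from the capture above)
rtt = 0.220           # round-trip time in seconds (long-distance WAN)

throughput = buffer_bytes / rtt           # bytes per second
print(f"~{throughput / 1024:.1f} KiB/s")  # roughly 40 KiB/s
```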
—–
The buffer that appears to be holding the transfer up is:
socketBuffer
https://tomcat.apache.org/tomcat-6.0-doc/config/http.html
“The size (in bytes) of the buffer to be provided for socket output buffering. -1 can be specified to disable the use of a buffer. By default, a buffer of 9000 bytes will be used.”
Here’s a CloudShark link to one of the captures I’m working with (IPs and details obfuscated using TraceWrangler):
https://www.cloudshark.org/captures/94b162026653