Bufferbloat, and why the "net neutrality" debate happened. [Dec. 17th, 2010|08:14 pm]
Rob Landley
[Current Location |Same couch.]
[mood |curious]

Jim Gettys (one of the creators of X11) recently stumbled onto something interesting, but he seriously cannot summarize it. So I'll try.

Adding buffers to routers has screwed up the internet, and as the buffers get bigger the problem gets worse. This is why your whole house's net becomes unusable when a single person is uploading something big. This is why some large ISPs (like Clear) can have 6-second ping times when the system is loaded: your packet is queued behind others in enormous buffers. Playing WoW and such under those circumstances is impossible. DNS lookups can become impossible too (they time out before the reply comes back), so you can't use the net at all when it gets really bad.

The problem is that the main protocol behind the internet (TCP/IP) was designed to figure out how fast connections can go by starting slow and constantly going slightly faster until it starts losing packets, at which point it slows down again. It does this constantly: a certain amount of packet loss is not just expected but _required_ in net connections, or else they'll keep trying to go faster and faster forever. (After all, you don't know who ELSE is using the internet or when they stop, so you don't know how fast you can really go until you try.)
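Real TCP congestion control has a lot more moving parts (slow start, fast retransmit, and so on), but a toy additive-increase/multiplicative-decrease loop shows the basic probing behavior. The capacity number below is made up, just a stand-in for whatever the link can actually carry:

    # Toy sketch of TCP-style probing (AIMD), not real TCP: speed up
    # a little every round trip until a loss says to back off, then halve.
    capacity = 10.0              # hypothetical link limit (packets per round trip)
    rate = 1.0                   # start slow
    for rtt in range(30):
        if rate > capacity:      # exceeding capacity drops packets...
            rate = rate / 2      # ...so halve the rate (multiplicative decrease)
        else:
            rate = rate + 1      # no loss: nudge the rate up (additive increase)
        print(f"round trip {rtt}: sending at {rate:.1f}")
    # Without any loss (i.e. infinite capacity) the rate would climb forever,
    # which is exactly why some packet loss is _required_.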

But pointy-haired managers in the past decade or so decided that losing packets ever was unacceptable, thus they inserted megabytes of buffering in the routers to capture them all and hold them until they can be transmitted. And those stale packets sit around in buffers for a long time, and screw things up.
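To put numbers on that (hypothetical but typical figures): the delay a full buffer adds is just its size divided by the link speed.

    # How long a full buffer delays your packets: size / link speed.
    buffer_bytes = 1 * 1024 * 1024            # 1 MB of router buffering
    uplink_bits_per_sec = 1_000_000           # 1 Mbit/s cable/DSL uplink
    delay = buffer_bytes * 8 / uplink_bits_per_sec
    print(f"{delay:.1f} seconds of queueing delay")   # ~8.4 seconds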

The end result is crazy bursts of throughput and multisecond latency spikes. If you try to upload a large file (or the other end tries to _send_ you a large file), they can stuff megabytes of data into a buffer and then anything else you try to send waits in line behind that, delayed for a long time. If the delay gets large enough, the connection will time out and tell the other end to retransmit, and then ignore the out-of-sequence data delivered in the meantime because it's not what it's expecting. So the buffered packets are wasted, and just slow down the arrival of the next set of interesting packets.
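A rough illustration of why those buffered packets get wasted (the timeout value here is a made-up placeholder; real TCP computes it adaptively):

    # If the queueing delay exceeds the sender's retransmission timeout,
    # the sender resends before the first copy is ever delivered, so the
    # copy still sitting in the buffer crosses the link for nothing.
    queue_delay = 8.4    # seconds a full buffer adds (from the math above)
    rto = 3.0            # hypothetical retransmission timeout
    if queue_delay > rto:
        print("every packet in the queue gets retransmitted: pure waste")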

This problem has been nicknamed "bufferbloat". One reason it wasn't noticed earlier is that good engineers are scientists, and the scientific method suggests isolating variables so you test one thing at a time, but this is a case of high throughput screwing up your latency, so you have to test _both_ at the same time to see it. If you just test throughput or just test latency (which they do, a lot), you don't get enough information to diagnose the problem. When you saturate a buffered link for a while, your ping times go into the multi-second range, but you have to know to look for that. The _fact_ that it was misdiagnosed for years has allowed it to gradually grow worse.
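If you want to see it yourself, the test is exactly that: measure latency while the link is loaded. Here's a rough sketch in Python (it assumes a Unix-style ping command and a host of your own that accepts connections on some port; HOST and PORT are placeholders, not a real service):

    import socket, subprocess, threading, time

    HOST = "example.com"   # placeholder: a machine of yours that will accept the data
    PORT = 9               # placeholder: assumes something there accepts connections

    def ping_once(host):
        # Run one ping and pull out the round-trip-time line.
        out = subprocess.run(["ping", "-c", "1", host],
                             capture_output=True, text=True).stdout
        return [line for line in out.splitlines() if "time=" in line]

    def saturate(host, port, seconds):
        # Shove zeroes at the host to fill whatever buffers sit on the upload path.
        conn = socket.create_connection((host, port))
        chunk = b"\0" * 65536
        deadline = time.time() + seconds
        while time.time() < deadline:
            conn.sendall(chunk)
        conn.close()

    print("idle:", ping_once(HOST))
    uploader = threading.Thread(target=saturate, args=(HOST, PORT, 10))
    uploader.start()
    time.sleep(3)              # give the buffers time to fill
    print("loaded:", ping_once(HOST))
    uploader.join()

If the "loaded" ping times jump from tens of milliseconds to whole seconds, you've found a bloated buffer somewhere on the path.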

One saving grace of all this was that Windows XP and earlier were wind-up toy operating systems designed for dialup connections: when web browsing and such, they would only ever send 64k of data before waiting for the other end to acknowledge receipt. This hugely limited how fast they could ever go on broadband connections (plugging gigabit ethernet into Windows XP is a waste of time), but it also meant that the majority of the traffic on the internet could never pump megabytes of data into these buffers to tie up the next 30 seconds of your net connection. The very patheticness of Windows XP also meant it couldn't stress the system enough to cause it to fail.
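The arithmetic behind that cap: with a fixed 64k window, the sender can only have 64k in flight per round trip, so throughput tops out at window divided by round-trip time no matter how fast the wire is. With a hypothetical 100ms round trip:

    # Max throughput with a fixed 64 KB window: window / round-trip time.
    window_bytes = 64 * 1024
    rtt_seconds = 0.1                          # hypothetical 100 ms round trip
    bits_per_sec = window_bytes * 8 / rtt_seconds
    print(f"{bits_per_sec / 1e6:.1f} Mbit/s ceiling")   # ~5.2 Mbit/s, gigabit or not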

Now that people are replacing Windows XP with newer versions that aren't limited to 64k default transmission windows, the problem is getting suddenly and dramatically worse. (Note: a Windows XP system can still wind up with multi-second delays if somebody _else_ in the house is using a non-toy OS. Just because it can't fill a buffer doesn't mean its packets can't be stuck behind a large buffer somebody else filled. And a house with several people watching streaming video at once can also trigger this, even if an individual stream doesn't. And when somebody else triggers it, the XP users suffer more, because instead of waiting 5 seconds for the next 500k they're waiting 5 seconds for the next 64k. Using old versions of Windows doesn't actually fix anything; it was just less capable of triggering the problem.)

So now you're ready to read the second article, which boils down to the fact that BitTorrent has been triggering bufferbloat for years (apparently even from those old versions of Windows), the ISPs didn't understand what was going wrong (they misdiagnosed the problem like everybody else), and they started damaging their networks in a misguided attempt to fix a problem they didn't understand. (And of course once politics gets involved in engineering decisions, all bets are off...)

Comments:
From: gettys.myopenid.com
2011-01-11 08:01 pm (UTC)

Not just pointy-haired managers...

We engineers are equally guilty of not realizing that buffering needs to be minimized.

Also, many parallel TCP connections on XP can induce bufferbloat; this occurs with increasing frequency due to changes in web browsers.