We love a faster Internet, so when we heard that Google was developing a new, faster protocol for accessing the web, we got really excited. The new protocol is called QUIC, for “Quick UDP Internet Connections”, and is described here, and here, but in a nutshell, it is a reinvention of TCP built on top of UDP, with a bunch of modern practices thrown in: stream multiplexing, forward error correction, congestion control, compression, and encryption, all without any of the baggage of TCP, so Google is free to experiment with window sizes, framing, ACKs, and all the fun bits of a protocol.
Google is developing the protocol largely in the open, with their rationale and design docs made public, and all the source code living under the Chromium open source project. We have been watching its development closely since June, and with it seeming to approach stability, we decided it was a good time to put it through its paces to see how HTTP over QUIC (which I will just call “QUIC” for the rest of this post) compares with HTTP over TCP (which I will mostly just call “regular HTTP”) on some real-world problems.
Our first focus was on the fact that QUIC is built on UDP rather than TCP, which got us wondering: would it help solve two common problems inherent in TCP?
- Small amounts of packet loss severely impact file transfers
- High-latency transfers are limited by TCP window scaling
These are problems that we care deeply about here at Connectify. Our customers use our software to share, load balance, and channel bond Wi-Fi, 3G, 4G, and often satellite Internet connections. Any technology that helps them overcome the wireless loss and latency that is slowing them down is something we’re excited about.
QUIC could potentially address both of those problems. To test, we transferred a pseudo-random (so that compression can’t help) 10MB file using both regular HTTP and QUIC, under a series of simulated network conditions. We simulate different network conditions using off-the-shelf router hardware and a couple of packages that we wrote and open-sourced (relying on some very excellent packages already in the Linux kernel for the hard parts!). Our testing scripts are all available on GitHub.
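For reference, the goodput figures in the tables below follow directly from the file size and the measured transfer time. A quick sketch (the helper name is ours, not from our test scripts):

```python
def goodput_mbps(file_size_bytes, transfer_time_s):
    """Goodput: application payload delivered per unit time, in Mbps."""
    return file_size_bytes * 8 / transfer_time_s / 1e6

# Example: the 10MB test file arriving in 16 seconds corresponds to
# 5 Mbps of goodput, i.e. a fully utilized 5Mbps link.
print(goodput_mbps(10_000_000, 16.0))  # -> 5.0
```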
The conclusion is that QUIC doesn’t address either of these problems just yet. It is very commendable just how close to regular HTTP it is already, and we’ll be looking to address these problems with patches, as well as testing other scenarios.
How does QUIC compare to regular HTTP as packet loss increases?
QUIC includes forward-error correction (FEC), which should smooth over packet loss. We measured the time it took to transfer a 10MB file via QUIC and regular HTTP, with 160ms round-trip time and a 5Mbps maximum connection speed, dialing up the packet loss between 0% and 5%. Unfortunately, the results show QUIC suffering to nearly the same extent as regular HTTP.
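The steepness of TCP’s decline under random loss is roughly predicted by the well-known Mathis model, which bounds throughput by MSS/RTT · 1.22/√p. Plugging in our loss-test conditions (a back-of-the-envelope sketch; the function name is ours):

```python
from math import sqrt

def mathis_bound_mbps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Rough upper bound on TCP throughput under random loss (Mathis model)."""
    return (mss_bytes * 8 * c) / (rtt_s * sqrt(loss_rate)) / 1e6

# Our conditions: 160 ms RTT, 1% loss, a standard 1460-byte MSS.
print(mathis_bound_mbps(1460, 0.160, 0.01))  # ~0.89 Mbps, under 20% of the 5Mbps link
```

That prediction lines up with what we measured: at 1% loss, TCP can sustain less than a fifth of the link’s capacity.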
Ouch. Really, just look at that chart again. A loss of just one packet in a hundred causes you to lose more than 80% of your speed with TCP, and QUIC is only a little better. In many places, just getting 1% loss on your wireless link is pretty good. There’s a real opportunity here for someone to do way more than double real-world wireless goodput. But QUIC isn’t there yet, which seemed odd given that FEC is part of the protocol.
As it turns out, FEC is not enabled in the sample client and server, and it does not appear to be enabled in Chrome, either. The main reason FEC isn’t enabled yet is that there isn’t a consensus on how it should be used (discussion here). But it is in the protocol, so this should hopefully be fixed in the future.
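The FEC scheme in the QUIC protocol is XOR-based: the sender groups packets and adds one parity packet that is the XOR of the group, so any single lost packet in the group can be rebuilt without a retransmit. A toy illustration of the idea (not QUIC’s actual framing; equal-length payloads assumed for simplicity):

```python
def make_parity(packets):
    """Build one parity packet as the XOR of every packet in the group."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def recover(received, parity):
    """Rebuild the single missing packet by XOR-ing parity with what arrived."""
    missing = bytearray(parity)
    for pkt in received:
        for i, byte in enumerate(pkt):
            missing[i] ^= byte
    return bytes(missing)

group = [b"pkt-one!", b"pkt-two!", b"pkt-3!!!"]
parity = make_parity(group)
# Suppose the second packet is lost in transit; parity lets us restore it:
print(recover([group[0], group[2]], parity))  # -> b'pkt-two!'
```

The trade-off is obvious from the sketch: one extra packet of overhead per group buys recovery from exactly one loss per group, which is why getting the group size right matters so much.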
| Method | Packet Loss (%) | Transfer Time (s) | Goodput (Mbps) |
Round-Trip Time: 160ms
Max Download: 5Mbps
Max Upload: 5Mbps
How does QUIC compare to regular HTTP as latency increases?
As latency increases, QUIC doesn’t appear to compensate as well as TCP does. We again measured the time it took to transfer a 10MB file via QUIC and regular HTTP, this time with a 10Mbps maximum connection speed and 0% packet loss, dialing up the round-trip time between 80ms and 800ms.
QUIC suffered a greater degradation at high latency than regular old HTTP over TCP. The TCP stack in Linux has had a lot of time to improve its window scaling, but with its own ACK scheme, in-flight window, and a bit of FEC, QUIC has the potential to overcome this.
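The underlying constraint here is the bandwidth-delay product: to keep a link full, the sender must keep an entire round-trip’s worth of data in flight, and that window grows linearly with latency. A quick illustration under our test conditions (the function name is ours):

```python
def window_needed_bytes(link_mbps, rtt_s):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return link_mbps * 1e6 / 8 * rtt_s

# Filling a 10Mbps link takes 100,000 bytes in flight at 80 ms RTT...
print(window_needed_bytes(10, 0.080))
# ...but 1,000,000 bytes at 800 ms -- far beyond TCP's classic 64KB
# window, which is exactly what window scaling exists to fix.
print(window_needed_bytes(10, 0.800))
```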
| Method | Transfer Time (s) | Round-Trip Time (ms) | Goodput (Mbps) |
Packet Loss: 0%
Max Download: 10Mbps
Max Upload: 10Mbps
How does QUIC compare to regular HTTP as maximum connection speed increases?
So just how fast can QUIC go today without high latency or packet loss? We again measured the time it took to transfer a 10MB file via QUIC and regular HTTP, this time with a round-trip time of 20ms and 0% packet loss, dialing up the maximum connection speed between 2Mbps and 70Mbps.
QUIC includes packet pacing with millisecond resolution, which appears to cap the goodput at just under 9Mbps. That is, QUIC will currently never send more than 1,000 packets per second, no matter how fast the Internet connection. In fact, many machines don’t have cheap-to-access clocks with resolution finer than 1 millisecond, and even then, we’ve seen plenty of modern systems with timers that jump in multiple-millisecond steps, so many machines won’t be able to reach even that 9Mbps speed. In the long run, timer-based packet pacing doesn’t seem like the right solution, but the QUIC team is looking at other options, and I have to think that this will get ironed out as well.
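The arithmetic behind that ceiling is simple: one packet per millisecond tick caps the send rate at 1,000 packets per second, and packet size does the rest. A sketch (the ~1,200-byte packet size is our assumption for illustration; goodput lands a bit lower than this wire rate once headers are subtracted):

```python
def pacing_cap_mbps(packets_per_sec, packet_bytes):
    """Wire-rate ceiling imposed by sending at most one packet per timer tick."""
    return packets_per_sec * packet_bytes * 8 / 1e6

# Millisecond timers allow at most 1,000 packets/s; with ~1,200-byte
# packets (our assumption), that is a hard cap regardless of link speed.
print(pacing_cap_mbps(1000, 1200))  # -> 9.6
```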
| Method | Transfer Time (s) | Max Download (Mbps) | Max Upload (Mbps) | Goodput (Mbps) |
Round-Trip Time: 20ms
Packet Loss: 0%
The client-side machine was a Toshiba Satellite A665-S6100X with an i7-2630QM CPU @ 2.00GHz, running Fedora 19 x86_64.
The server-side machine was a Toshiba Satellite L645D-54030 with an AMD Turion II P520 @ 2.30GHz, running Fedora 19 x86_64.
The WAN-emulating router was a TP-Link TL-WR1043ND running OpenWrt trunk.
QUIC is real, and I have seen it work! But it doesn’t yet provide the advantages that we expect to see out of it in the near future: on links with loss, the Forward-Error Correction could be a game changer; on links with high latency, its ability to create new ACK/NAK/windowing schemes should allow it to deliver throughputs much higher than TCP… but today these are aspirations, not reality.
Now that we’re set up here to test QUIC performance, we’re going to keep watching, building and, if anything exciting happens, sharing these results with you. We love the vision, and can see that Google is going in the right direction at high speed, so expect to hear more about QUIC here in the near future.
Update: Google’s QUIC team has responded to this post here