A lot of enterprises, operators and organizations block or rate-limit UDP traffic outside of port 53 (used for DNS), since in recent times it has mostly been abused for attacks. In particular, some of the existing UDP protocols and popular server implementations for them have been vulnerable to amplification attacks, where one attacker can generate a huge amount of outgoing traffic to target innocent victims.
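To make the mechanism concrete, the damage of an amplification attack is usually expressed as an amplification factor: the ratio of response bytes (sent to the spoofed victim address) to request bytes. The sizes below are hypothetical illustrations, not measurements of any particular protocol.

```python
# Illustrative arithmetic only: an attacker spoofs the victim's source
# address in a small UDP request, and the server's much larger response
# goes to the victim. The sizes here are made-up example values.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio by which a reflector multiplies the attacker's traffic."""
    return response_bytes / request_bytes

# A small spoofed query triggering a much larger response:
factor = amplification_factor(request_bytes=60, response_bytes=3000)
# The attacker's 60 bytes become 3000 bytes aimed at the victim,
# a 50x amplification.
```

A protocol keeps this factor small by refusing to answer tiny requests with large responses until the sender's address has been validated, which is exactly what QUIC does.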
QUIC has built-in mitigation against amplification attacks: it requires that the initial packet be at least 1200 bytes, and the protocol restricts a server from sending more than three times the size of the request before receiving a packet from the client in response.
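The two rules above can be sketched as server-side bookkeeping. This is a minimal illustration of the idea, not a real QUIC implementation; the class and method names are invented for this example.

```python
# Hypothetical sketch of QUIC's two anti-amplification rules:
#  1. a client Initial packet smaller than 1200 bytes is dropped, and
#  2. before the client's address is validated, the server may send at
#     most three times the number of bytes it has received.

MIN_INITIAL_SIZE = 1200   # minimum size of a client Initial packet
AMPLIFICATION_LIMIT = 3   # server may send at most 3x what it received

class ServerAddressState:
    """Per-client-address state on the server (illustrative only)."""

    def __init__(self) -> None:
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False

    def on_datagram_received(self, size: int, is_initial: bool = False) -> bool:
        # Dropping undersized Initials means a tiny spoofed packet
        # cannot trigger a large response toward a victim.
        if is_initial and size < MIN_INITIAL_SIZE:
            return False
        self.bytes_received += size
        return True

    def may_send(self, size: int) -> bool:
        # Until the address is validated, stay within the 3x budget.
        if self.address_validated:
            return True
        budget = AMPLIFICATION_LIMIT * self.bytes_received
        return self.bytes_sent + size <= budget

    def on_datagram_sent(self, size: int) -> None:
        self.bytes_sent += size
```

With this bookkeeping, a server that has received one 1200-byte Initial may send at most 3600 bytes before hearing from the client again, which caps the amplification factor at three.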
This seems to be the truth, at least today in 2018. We can of course not tell how this will develop, and how much of this is simply the result of UDP transfer performance not having been in developers' focus for many years.
For most clients, this “slowness” is probably never even noticeable.
Similar to the "UDP is slow" remark above, this is partly because the TCP and TLS usage of the world has had a longer time to mature, improve and get hardware assistance.
There are reasons to expect this to improve over time. The question is how much this extra CPU usage will hurt deployers.
No, it is not. Google brought the initial spec to the IETF after having proved, on a large Internet-wide scale, that deploying this style of protocol over UDP actually works and performs well.
Since then, individuals from a large number of companies and organizations have worked in the vendor-neutral organization IETF to put together a standard transport protocol out of it. In that work, Google employees have of course been participating, but so have employees from a large number of other companies that are interested in furthering the state of transport protocols on the Internet, including Mozilla, Fastly, Cloudflare, Akamai, Microsoft, Facebook and Apple.
That is not really a critique but an opinion. Maybe it is, and maybe it is too little of an improvement so close in time since HTTP/2 was shipped.
HTTP/3 is likely to perform much better on networks plagued by packet loss, and it offers faster handshakes, so it will improve both perceived and actual latency. But are those benefits enough to motivate people to deploy HTTP/3 support on their servers and for their services? Time and future performance measurements will surely tell!