A Question of Priorities

Jerry Brito, a legal scholar and fellow at the Mercatus Center, posted a thoughtful piece about Internet priorities yesterday at the TLF blog, Let's get our priorities straight. Jerry starts by commenting on a recent piece by Litan and Singer on prioritization for a fee and asks whether it's even practical:

What Litan and Singer explain about price discrimination and product differentiation is absolutely correct. What I don’t think they understand is that “priority delivery” of content over the internet is not within ISPs’ technical capability.

First, let’s be clear what we’re talking about here. As far as I understand it, net neutrality regulation would apply to communications over the internet, not simply to communications that happen to use internet protocol (IP). Consider services like ESPN360, or edge-caching a la Akamai, or imagine Google colocating their YouTube servers at Verizon’s offices. These techniques allow users to get packets over the internet quicker, but do nothing to violate neutrality. You can also imagine Comcast offering an IP-based video game service that, much like cable TV, creates a fast direct connection between Comcast and the user. In a sense, this is prioritization of packets, but not over the internet. If I’m not wildly off-base, this would not violate any of the major net neutrality proposals, either.

So if we’re not talking about co-locating, edge-caching, or creating separate IP-based networks, how exactly would ISPs offer “priority delivery” of packets over the internet? The very nature of the internet is that it is a best-efforts network. Short of a re-engineering of the net, any “additional or differentiated services,” as outlined in the Google-Verizon proposal, would have to necessarily stand apart from the internet because, as much as they’d like to, ISPs cannot prioritize packets over the internet. If they could, we’d see such a service now, but we don’t.

Why can’t they do it? Because it would require the dozens, if not hundreds, of networks that a packet traverses in its travels from sender to recipient to agree to respect the same prioritization scheme. Even if we assumed that we could re-engineer the internet to accomplish this, you’d have to deal with users disguising low-priority packets as high-priority ones. And because a central arbiter would likely be necessary, you’d probably lose some of the unplanned nature of the internet that makes it so wonderful.

The first observation about Content Delivery Networks is spot-on: net neutrality advocates give them a pass, and then go on to condemn ISP practices that prioritize within the transit portion of the Internet as well as within the last mile. As the two techniques produce similar effects on non-privileged traffic, the disconnect is quite stunning: net neutrality is supposed to forbid source-based discrimination, but the disparate treatment of CDNs and ISPs is nothing but source-based discrimination.

Jerry’s question about how to implement an Internet-wide prioritization system is actually the easy part: the technical means are DiffServ (RFC 2474, 2475, etc.) and MPLS, both Internet Standards. Some use of BGP Community Attributes to advertise priority routes would also be helpful. These tools are used today in limited circumstances to provide inter-domain QoS. The harder part is that a number of peering and transit agreements would need to be amended to make a system of paid QoS effective. But the overall scope of that change would be smaller than the scope of the transition from IPv4 to IPv6 that’s currently underway.

IPv6 will cause massive problems with Internet routing as it currently stands, because IPv6 routing is overly flexible. Organizations can use Provider-Independent addressing in IPv6, which is supposed to give them portability across ISPs (you’ll be able to switch ISPs without re-numbering your systems), but the cost to the Internet routing system is that routes will proliferate: addresses will carry no clue about how to reach the transit network, and there will be no way to aggregate routes to firms that share a common ISP. That means the 300,000 routes we currently have will blow up by an order of magnitude, more or less, which is a problem for many reasons.
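
Returning to the mechanics of prioritization: here is a minimal sketch of the DiffServ side of the story, showing how a router might classify packets into forwarding queues by their DSCP markings. The codepoint values come from the RFCs cited above; the queue names and weights are invented for illustration, and real routers configure this through vendor-specific policy languages rather than anything like this.

```python
# Minimal sketch: mapping DiffServ codepoints (DSCP) to per-hop
# behaviors, the way a DiffServ-capable router classifies packets
# into queues. The DSCP values are standard (RFC 2474, 2597, 3246);
# the queue names and weights are purely illustrative.

DSCP_EF = 46     # Expedited Forwarding: low-loss, low-latency (RFC 3246)
DSCP_AF41 = 34   # Assured Forwarding class 4, low drop precedence (RFC 2597)
DSCP_BE = 0      # Default: best efforts (RFC 2474)

# Hypothetical per-hop behavior table: codepoint -> (queue, scheduling weight)
PHB_TABLE = {
    DSCP_EF:   ("priority", None),    # strict priority, served first
    DSCP_AF41: ("video", 40),         # weighted fair share
    DSCP_BE:   ("best_efforts", 10),
}

def classify(dscp: int) -> str:
    """Return the queue a packet with this codepoint would join.
    Unknown codepoints fall through to best efforts, which is what
    routers do with markings they have no agreement to honor."""
    queue, _weight = PHB_TABLE.get(dscp, PHB_TABLE[DSCP_BE])
    return queue

if __name__ == "__main__":
    for dscp in (46, 34, 0, 7):
        print(f"DSCP {dscp:2d} -> {classify(dscp)} queue")
```

The last case is the important one for the peering discussion: a marking from a network you have no agreement with simply falls into the default queue.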

A very interesting part of Jerry’s argument is the claim that because the Internet is a best-efforts network, it must be impossible to prioritize across it. That leads to his speculation about changing the “dozens or hundreds of networks a packet traverses in its travels from sender to recipient.” In fact, the typical Internet packet crosses about 18 router hops between source and destination, but only three or four networks on average.
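
The hop/network distinction is easy to picture. Here is a toy sketch with an invented path; a real path can be examined with traceroute plus an AS lookup.

```python
# Toy illustration of the hop/network distinction: a packet may cross
# many router hops while touching only a few administrative networks.
# This path is invented for illustration.

# (hop_number, owning_network): a hypothetical 18-hop path
path = [
    (1, "Verizon"), (2, "Verizon"), (3, "Verizon"), (4, "Verizon"),
    (5, "Verizon"), (6, "Level 3"), (7, "Level 3"), (8, "Level 3"),
    (9, "Level 3"), (10, "Level 3"), (11, "AT&T"), (12, "AT&T"),
    (13, "AT&T"), (14, "AT&T"), (15, "AT&T"), (16, "AT&T"),
    (17, "AT&T"), (18, "AT&T"),
]

# Collapse consecutive hops owned by the same network
networks = []
for _, owner in path:
    if not networks or networks[-1] != owner:
        networks.append(owner)

print(f"{len(path)} hops, {len(networks)} networks: {' -> '.join(networks)}")
# 18 hops, 3 networks: Verizon -> Level 3 -> AT&T
```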

A lot of people have the idea that the Internet is some sort of warm and fuzzy cloud of altruism in which people carry packets as a public service; Jonathan Zittrain promotes the idea that it’s like passing hot dogs down the line at a baseball game. Policy people seem to think that ISPs all carry packets to this amorphous Internet, and it magically gets them to other ISPs through a technical and economic process too mystical to worry about. According to this notion, when a Verizon customer in Boston sends a packet to an AT&T customer in California, a completely unrelated group of organizations carry the packet without any economic interest in it. So the prioritization scheme would need to be endorsed by all of the genies “in the Internet” or it wouldn’t work.

This is wrong, actually. When the VZ customer sends to the ATT customer, all VZ (the origin network) has to do is get the packet to the nearest edge of the ATT network, and ATT will take it from there without magic genies. These two large networks peer with each other, so in most cases no other network is involved. In some instances, VZ may have to use a transit network to reach ATT, or ATT may have to use a transit network to reach VZ, but even then the transit network is doing work-for-hire for a paying customer and is therefore bound to carry out the job as it’s told (and paid) to do. When a network that’s part of the Internet wants to communicate with another network, it carries a packet to the other network or pays someone to carry it for them.
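
As a toy illustration of that handoff decision, here is a sketch of how an origin network might pick where to hand a packet off: deliver it directly to a settlement-free peer when one exists, otherwise hand it to a paid transit provider. The network names and the table are invented for illustration.

```python
# Toy sketch of interconnection choices: prefer a direct peer (free
# exchange of traffic), fall back to a paid transit provider for
# everything else. Names and tables are invented for illustration.

PEERS = {"AT&T", "Sprint"}       # settlement-free interconnections
TRANSIT_PROVIDER = "Level 3"     # paid to reach everyone else

def next_network(destination_network: str) -> str:
    if destination_network in PEERS:
        # Hand off at the nearest edge of the peer's network
        return destination_network
    # Work-for-hire: the transit provider carries it the rest of the way
    return TRANSIT_PROVIDER

print(next_network("AT&T"))     # AT&T   (direct peer)
print(next_network("Telstra"))  # Level 3 (via paid transit)
```

In both cases there is a business relationship behind the handoff, which is exactly why prioritization terms can be written into peering and transit agreements.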

Ultimately, ISPs would only be able to offer end-to-end prioritization when interconnecting with other networks that have agreed to honor their system, which is where the peering and transit agreements come in. For the system to be 100% effective, all ISPs, large and small, would either need to buy in or be so massively over-provisioned that priority was a non-issue. Everyone with a wireless footprint would buy in. But one of the great lessons of the Internet is that a system doesn’t need to be 100% effective to be useful: if only the 10 largest ISPs in North America agreed to honor QoS markings for priority treatment, the system would be good enough for most people to use most of the time; the rest would come on board quickly anyhow. Even if ISPs only honored QoS inside their own networks, it would still be useful to their peers.

This brings us to the question of whether the design of the Internet as a “best efforts network” precludes “better than best efforts” treatment of some packets. The people who’ve been trying to explain the Internet since the 1980s have been a little lax with terminology, so the term “best efforts network” means at least two different things. Internet people use it to mean a more or less top-heavy system in which the lower system layers are not trusted to deliver information reliably: TCP assumes that IP may deliver packet streams out of order, with missing or damaged packets, and corrects those errors itself. Back in the early days of packet-switched networks, there was a fad for building reliable systems out of unreliable parts, and that’s where this notion comes from. The end-to-end people at MIT also had the peculiar notion that building application-specific features into the lower parts of a distributed system would necessarily degrade performance for all users of the system, but that too is an artifact of history now that chips are so capable.
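
A minimal sketch of that “reliable system from unreliable parts” idea: a toy TCP-like receiver that rebuilds an in-order byte stream from segments the lower layers delivered out of order. The segment format and function names are invented for illustration; real TCP adds windows, timers, and retransmission on top of this.

```python
# Toy TCP-like receiver: reassemble an in-order stream from segments
# that arrive out of order or duplicated. Format invented for
# illustration: each segment is (sequence_number, payload_bytes).

def reassemble(segments):
    """Returns the in-order stream and the cumulative ACK point."""
    buffer = {}      # out-of-order segments held for later
    stream = b""
    next_seq = 0     # next byte offset we are allowed to deliver
    for seq, payload in segments:
        if seq < next_seq:
            continue               # duplicate of data already delivered
        if seq != next_seq:
            buffer[seq] = payload  # hold it; the gap may fill in later
            continue
        stream += payload
        next_seq += len(payload)
        # Deliver anything buffered that is now contiguous
        while next_seq in buffer:
            payload = buffer.pop(next_seq)
            stream += payload
            next_seq += len(payload)
    return stream, next_seq        # next_seq doubles as the cumulative ACK

# Segments arrive scrambled; the stream still comes out right.
arrivals = [(4, b"efgh"), (0, b"abcd"), (12, b"mnop"), (8, b"ijkl")]
print(reassemble(arrivals))  # (b'abcdefghijklmnop', 16)
```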

In reality, the Internet is not a best efforts network in the strict sense of the term, where best efforts means an absence of Quality of Service features. Strictly speaking, the Internet Protocol deals with packet routing, not with packet delivery. In its entire life, IP has never delivered a single packet. All it does is find a route for a packet and then place the packet into a queue for delivery by a Link Layer function. The Link Layer (and its underlying Physical Layer) actually delivers the packet, so any decision about whether to go Best Effort, Better than Best Effort, or Cheaper than Best Effort is fundamentally a Link Layer decision. The nature of Link Layer protocols is out of scope for Internet engineering, end-to-end principles or not. (Link Layers are things like Ethernet, Wi-Fi, 3G, and WiMAX, and they’re all capable of multiple priority levels by design.) The only things IP itself can do are re-order packets in a queue or segregate routes, and there are better ways to do QoS.
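
To make the routing/delivery split concrete, here is a toy model of the transmit queue that sits between IP and the link: routing decides where the packet goes, while the link-layer scheduler decides when it goes. The strict-priority bands, band numbers, and packet labels are invented for illustration.

```python
import heapq

# Toy model of a link-layer transmit queue with strict priority bands:
# lower band number is served first, FIFO within a band (the arrival
# counter breaks ties). Bands and labels are invented for illustration.

class TransmitQueue:
    def __init__(self):
        self._heap = []
        self._arrivals = 0

    def enqueue(self, band: int, packet: str) -> None:
        heapq.heappush(self._heap, (band, self._arrivals, packet))
        self._arrivals += 1

    def dequeue(self) -> str:
        _band, _order, packet = heapq.heappop(self._heap)
        return packet

q = TransmitQueue()
q.enqueue(2, "bulk download chunk")
q.enqueue(0, "VoIP frame")          # arrives later, leaves first
q.enqueue(1, "video frame")
print([q.dequeue() for _ in range(3)])
# ['VoIP frame', 'video frame', 'bulk download chunk']
```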

Because IP has the ability to request priority treatment from a Link function via DiffServ and MPLS, the Internet is not a “best efforts network” in the strict sense; it’s an internetwork that can use whatever capabilities its underlying networks provide.

Therefore, the notion that the Internet is “a best efforts network” is essentially a myth, albeit one with a lot of currency. Taken as a claim, it may be saying either of two things, and it’s rarely clear which is intended in any given usage. Does it mean that the Internet is a best efforts (single service level) network by design, or that it is operated as a single service level network by convention?

It’s clearly not single service by design, as the IP header has carried Type of Service flags from the beginning, and subsequent RFCs such as 2474, 2475, et seq., have refined the usage of the flags. IP sets the flags to select a service from the Link Layer, and the flags are set by applications using the standard programming interface to Internet services, the Sockets API.
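
For instance, on Linux and most Unix systems a program can set those flags with a single setsockopt call; whether any network along the path honors the marking is, as noted above, a separate matter of policy and agreements. A minimal sketch:

```python
import socket

# Sketch of an application requesting a service class through the
# Sockets API: set the IP TOS byte, whose upper six bits carry the
# DSCP. IP_TOS is available on Linux and most Unix systems. The
# address and port below are illustrative (192.0.2.1 is a reserved
# documentation address).

DSCP_EF = 46           # Expedited Forwarding codepoint
tos = DSCP_EF << 2     # DSCP occupies the high six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Every datagram sent on this socket now carries DSCP 46 in its IP header.
sock.sendto(b"voice payload", ("192.0.2.1", 5004))
```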

Is the Internet a best efforts network in terms of operation and convention? For the most part, yes, but not absolutely or inevitably. It’s fundamentally a question of utility. If everyone is using the same application, there’s no need for service differentiation as all packets have the same requirement. For most of the time the public Internet has been in operation, there has been a predominant application, the Web. The non-web applications that have been able to thrive on the Internet are those that can co-exist with the web, such as VoIP. Given enough bandwidth to support web use, VoIP works well most of the time, e-mail is happy, and ftp works fine.

Video streaming is another non-web application, and we find that it needs a priority boost to be successful. But video streaming, where the content is canned, gets its priority boost from edge caching services such as Akamai. While edge caches don’t mark packets “high priority” and ask ISPs to honor the marking, they accelerate content by locating it close to the consumer, which has exactly the same effect as moving a priority packet to the head of a transmit queue. CDNs are high-priority services; that’s why people pay to use them.
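
Some toy arithmetic shows why the two techniques are interchangeable from the viewer’s perspective: delivery delay is roughly propagation plus queueing, and a cache attacks the first term while a priority queue attacks the second. All numbers below are invented for illustration.

```python
# Toy arithmetic: an edge cache and a priority queue both attack the
# same quantity, delivery delay. All numbers invented for illustration.

PROPAGATION_PER_KM_MS = 0.005    # roughly 5 microseconds/km in fiber

def delivery_delay(distance_km: float, queueing_ms: float) -> float:
    return distance_km * PROPAGATION_PER_KM_MS + queueing_ms

origin = delivery_delay(4000, queueing_ms=30)  # distant origin, congested queues
cached = delivery_delay(50, queueing_ms=30)    # edge cache: cut the distance
priority = delivery_delay(4000, queueing_ms=2) # priority queue: cut the wait

print(f"distant origin: {origin:.1f} ms")   # worst of both terms
print(f"edge cache:     {cached:.1f} ms")   # short distance wins
print(f"priority queue: {priority:.1f} ms") # short wait wins
```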

High-priority delivery for video conferencing can only be accomplished by direct ISP action, however: you can’t cache a video call. Conferencing is an emerging application, and the regulations now on the table can either promote it or kill it altogether. So even if we believe that the Internet is a best efforts network today and always has been, it doesn’t follow that it should only be a best efforts network in the future.

But Jerry argues that it can never be anything but a best efforts network. I disagree with this judgment, which rests primarily on a technical claim. In Jerry’s defense, I don’t think he meant “best efforts” in the strict sense, but in the more general sense that the Internet concentrates intelligence at the top and assumes nothing, or close to nothing, from the bottom.

Yes, TCP does error recovery, serialization, and rate control. But there is enough intelligence underneath the Internet these days (in MPLS and optical Ethernet) to ensure high reliability and to allow traffic shaping. The Internet is actually more like an application than a network. The people who run around talking about stupid networks don’t understand the technologies that carry IP datagrams, do the routing, and cross the oceans.

Priority services are available to commercial users, and are widely used. CDNs are the best type of prioritization for the home user, but other kinds are coming.

The issue of priority transit is a crucial element of the debate between the Open Internet and net neutrality, so it’s important to get it right, despite all the semantic challenges.
