A New Way to Look at Net Neutrality

New America Foundation’s Open Technology Institute is circulating a proposal by the consulting firm CTC Technology & Energy titled “Mobile Broadband Networks Can Manage Congestion While Abiding By Open Internet Principles” that has apparently captured the imagination of many net neutrality advocates. It relates to FCC chairman Wheeler’s desire to impose a common set of regulations on wired and mobile networks, with some small exceptions for required network management.

The proposal does a couple of interesting things, one of concern to net neutrality advocates and the other quite worrisome from a mobile network engineering standpoint. The net neutrality issue is its complete departure from the “all packets must be treated equally” rule proposed by Cardozo Law School professor (and former White House advisor) Susan Crawford and other net neutrality absolutists. Lest we forget, Crawford defines “discrimination” as any unequal treatment, justified or otherwise:

By “discrimination,” I mean allowing network access providers to treat some traffic or some users differently…A non-discriminatory network provider will send each packet on its way…without prioritizing any packet…Packets are treated by their routers on a first-in, first-out basis. This best efforts, non-discriminatory approach has worked very well so far. Although a telco provider may feel that the internet is broken because it cannot guarantee particular levels of service, there is excess capacity on the internet and packets, by and large, are not dropped. And even if they were, for most uses and most users a few dropped packets would not appreciably diminish the internet experience and with increasing bandwidth all packets will travel faster through last-mile networks.

CTC sensibly rejects this extreme view. Its proposal is also at odds with the position of Stanford Law School professor Barbara van Schewick, who argues against application-specific constructs in the network core:

An architecture can deviate from the broad version of the end-to-end arguments along two dimensions: it can become more “opaque” by implementing more application-specific functionality in the network’s core, or it can become more “controllable” by increasing network providers’ ability to control applications and content on their networks…The more application-specific functionality the network’s core contains, the more likely it is that new applications cannot be deployed without first making changes to the network. Whereas an architecture based on the broad version of the end-to-end arguments results in a network that is transparent for new applications, changes along this dimension make the network more opaque. (Internet Architecture and Innovation, pages 286-7)

CTC rejects Crawford’s understanding of net neutrality by replacing the concept of equal treatment for all packets with equal access to unequal treatment. This is such a major departure that it can’t be glossed over: CTC wants to discard the actual idea of net neutrality while retaining its name.

CTC also rejects Van Schewick’s prohibition of application-specific functionality inside the network’s core in favor of (presumably) equal access to a method for placing such functionality in the network through a new engineering process overseen by the FCC. So no matter how you’ve been defining net neutrality, CTC lays waste to your idea.

CTC wants an FCC-blessed standards body to create “Generic Quality of Service Profiles,” mobile networks to implement the profiles, and the FCC or the standards bodies to create an equal-access mechanism for them, verify their correct operation, and then upgrade, revise, and enhance them periodically. Thus, application-specific functionality would appear in network cores and packets would no longer be treated equally.

The process has five steps, which CTC defines in the following way (a rough sketch of what one such profile might look like as data follows the list):

  1. Industry standards bodies or another industrywide process approved by the FCC create generic QoS profiles related to latency sensitivity or other attributes that need similar QoS treatment, and make them open to all like applications, such as toll-quality voice and video communications.
  2. Wireless carriers define the type of network management each profile will receive, understanding that the management may be dynamic and complex, but that all like applications within the profile will receive the same treatment.
  3. The FCC or standards bodies create a streamlined process through which edge providers can identify their content and applications to the wireless carriers for treatment according to a QoS profile.
  4. The FCC or standards bodies create a process, such as periodic audit of active QoS rules, to transparently verify that the defined management structure is being implemented consistently.
  5. The FCC or standards bodies revisit the profiles regularly, and revisit the need for QoS and prioritization as spectrum efficiency increases and other technological improvements enter the marketplace.
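
To make step 1 concrete, here is what one of these profiles might look like if written down as data. This sketch is entirely illustrative: CTC doesn’t define a format, and the field names are my assumptions, loosely modeled on the LTE bearer parameters (QCI, guaranteed bit rate, ARP) that CTC itself cites later in the paper.

```python
from dataclasses import dataclass

@dataclass
class GenericQoSProfile:
    """Hypothetical "Generic QoS Profile" as a plain data record.

    Field names are assumptions loosely modeled on LTE bearer
    parameters; CTC's paper does not specify a concrete format.
    """
    name: str              # e.g., "toll-quality-voice"
    qci: int               # LTE QoS Class Identifier (1-9 in Release 9)
    gbr_mbps: float        # guaranteed bit rate, 0.0 for none
    max_latency_ms: int    # latency budget the profile promises
    arp: int               # Allocation and Retention Priority (1 = highest)

# Illustrative entries; the voice values echo CTC's own example
# (QCI=1, GBR=0.080 Mbps, ARP=7), the video values are invented.
PROFILES = {
    "toll-quality-voice": GenericQoSProfile(
        "toll-quality-voice", qci=1, gbr_mbps=0.080,
        max_latency_ms=100, arp=7),
    "interactive-video": GenericQoSProfile(
        "interactive-video", qci=2, gbr_mbps=1.0,
        max_latency_ms=150, arp=8),
}
```

Even this toy shows where the trouble starts: every field has to be standardized, audited, and versioned by somebody, and every revision ripples through carrier networks and applications alike.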

This procedure not only transforms net neutrality into net non-neutrality and end-to-end networking into core-centered networking, it transforms the FCC from a regulator enacting the will of Congress as expressed in the law into something more akin to a product developer. I say this with no sarcasm; this five-step procedure simply mirrors the process by which networking product development firms do business:

  • Someone has an idea;
  • He or she lobbies the idea through a standards body;
  • The firm implements the standard;
  • The firm verifies its correct operation; and
  • The firm upgrades, revises and enhances the product.

So this looks a lot more like product development than regulation. The CTC proposal raises a number of troubling questions at the level of implementation:

  1. Is the FCC capable of engaging in engineering at this level, where outcomes are highly speculative and “consistent implementation” is hard to define?
  2. Why hasn’t the mobile networking standards body, 3GPP, already defined “Generic QoS profiles” if they’re so easy, and why has it taken so long to figure out Voice over LTE?
  3. How long does it take to create a new profile?
  4. What effect does a new profile have on applications using the old profiles?
  5. How is a profile validated?
  6. How are inter-profile interactions characterized and quantified?
  7. Is the FCC up for creating application programming interfaces (APIs) for programmatic access to Generic QoS profiles?
  8. Is the FCC capable of auditing APIs for generic QoS profiles?
  9. What does the budget look like for creating and auditing these mechanisms?
  10. Does the law authorize the FCC to engage in any of these activities?
  11. Will this make mobile networks better platforms than they would become through the natural process of innovation and evolution?
  12. What about charging?

The general approach CTC takes is lacking in two key respects: it assumes essentially static conditions on the wireless network, and it provides no mechanism for feedback to the application as conditions change. CTC says its mechanism simply entails the application telling the network what it wants and the network treating the application’s packets accordingly.

This is the general engineering approach for applications on wired networks, but it’s not appropriate for networks that must accommodate a variety of data rates and roaming, as well as the general variability of applications starting and stopping. QoS systems generally have an “admission control” element that prevents over-subscription of network services: the application tells the network what it wants, and the network tells the application whether it can provide that level of service. CTC partially recognizes this:

For example, the wireless carrier may set the QoS for toll-quality voice with QCI=1, Minimum Guaranteed Bit Rate = 0.080 Mbps, and Allocation and Retention Priority (ARP) = 7, for the first 50 users in the sector. The wireless carrier may also set up entirely dynamic QoS that changes from this QoS to other settings in various conditions, such as areas where less spectrum is available, where there is congestion, or where interference is detected. And the wireless carrier may provide different QoS to the next 50 users trying to use this profile. However, the key thing is that the toll-quality voice profile would stay the same for all application providers using the toll-quality voice profile. (page 25).
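
That example translates directly into admission logic, and writing it out exposes what’s missing. Here is a minimal sketch; the degraded grant and the hard cap are my own assumptions, since CTC only specifies the treatment of the first 50 users:

```python
from typing import Optional

def admit_voice_user(active_voice_users: int) -> Optional[dict]:
    """Toy admission control for CTC's toll-quality voice example.

    The first grant mirrors the quoted settings (QCI=1, GBR 0.080 Mbps,
    ARP=7 for the first 50 users in the sector); the fallback grant and
    the cap at 100 users are hypothetical.
    """
    if active_voice_users < 50:
        return {"qci": 1, "gbr_mbps": 0.080, "arp": 7}  # full treatment
    if active_voice_users < 100:
        return {"qci": 2, "gbr_mbps": 0.040, "arp": 9}  # degraded grant
    # Sector full. The application needs to hear "no" -- and CTC's
    # one-way "identify yourself and be treated" model has no channel
    # for telling it.
    return None
```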

The first problem with CTC’s example is the Minimum Guaranteed Bit Rate, a feature borrowed from wireline networking. When mobile apps negotiate with networks for service levels, they specify a range of bit rates over which they can operate and forecast their usage. This is done in Wi-Fi with the “Add Traffic Specification” (ADDTS) Management Action frame, which expresses requirements in a construct known as a “TSPEC”. As the diagram below shows, this frame communicates 15 parameters, everything from Element ID to Surplus Bandwidth Allowance:

[Figure: IEEE 802.11e (Wi-Fi) Add TSPEC frame]

These 15 parameters tell the network how much data the application expects to send, the size and frequency of its packets, the organization of its packets in terms of bursts, the range of data rates over which it can operate, the delay it can tolerate, and the extra bandwidth it needs to reserve to allow for roaming. This exchange is repeated every time the device roams. Wi-Fi is a fairly simple system that anyone can administer, and its elements don’t coordinate with each other the way the cellular network’s do, so the 15 parameters Wi-Fi exchanges are table stakes for any network that allows roaming.

Responding to a request for QoS from a device doesn’t mean the Wi-Fi access point simply says “yes” or “no”; it can make a counter-offer to the device with the parameters it is capable of providing. Thus, it can say “yes”, “no”, or “yes, but”, replying with an alternate set of parameters in the final case. Then the application has to decide whether the counter-offer is acceptable to it. In the case where the response is acceptable, the network’s TSPEC governs the transaction rather than the device’s original TSPEC. The Wi-Fi system doesn’t have a charging element, which would be very important in a real network dealing with a constant level of congestion.
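
Here’s a sketch of that exchange as plain request/response logic. The field subset and the counter-offer policy are simplifications of my own; a real TSPEC carries all 15 parameters and the access point’s policy is vendor-specific:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class TSpec:
    """A few of the 15 TSPEC parameters from an 802.11e ADDTS request.

    Illustrative subset; the real frame also carries nominal MSDU size,
    service intervals, burst size, delay bound, and more.
    """
    mean_data_rate_kbps: int            # expected average throughput
    min_phy_rate_mbps: int              # lowest PHY rate the app can live with
    surplus_bandwidth_allowance: float  # headroom factor for roaming/retries

def respond_to_addts(request: TSpec, available_kbps: int) -> Optional[TSpec]:
    """Toy access-point response: "yes", "no", or "yes, but".

    Policy here is a made-up example: grant as asked if capacity allows,
    counter-offer at the available rate if it's at least half the ask,
    otherwise reject. If a TSpec comes back, it governs the flow,
    not the device's original request.
    """
    asked = int(request.mean_data_rate_kbps * request.surplus_bandwidth_allowance)
    if asked <= available_kbps:
        return request                       # "yes"
    if available_kbps >= request.mean_data_rate_kbps // 2:
        return replace(request,              # "yes, but": counter-offer
                       mean_data_rate_kbps=available_kbps,
                       surplus_bandwidth_allowance=1.0)
    return None                              # "no"
```

So a device asking for 80 kbps with a 1.2 surplus allowance from a loaded access point might get back a 60 kbps counter-offer, and then it decides whether to live with that or fall back to best effort.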

Moreover, the CTC system lacks the continual feedback between application and network that takes place in Wi-Fi Multimedia (WMM), the QoS reservation system used on unlicensed networks.

We know from traffic studies that most mobile networks are moderately congested for hours at a time; average data rates for mobile in the US are about a third of peak data rates because of limited spectrum and the cost and red tape involved in deploying new cell towers.

[Figure: G7 Mobile Broadband Speed, Q1 2014. Source: Akamai State of the Internet]

Mobile networks in other G7 nations are even more congested, with peak-to-average ratios ranging from 3.4 in Canada to 8.3 in Japan. When resources are continually scarce, they’re best allocated when charging is part of the allocation system. So for all its complexity, the CTC system is not nearly as complex and dynamic as it would need to be in a real network, and it’s missing the most vital elements, charging and roaming support. It appears that the CTC system was put together with no more knowledge of wireless dynamics than the authors could gain by reading Release 9 of the LTE specifications, a five-year-old document.

I appreciate CTC’s admission that the traditional forms of network neutrality are counter-productive, but I have to say its system falls far short of requirements and asks the FCC to do things it’s not well equipped to do. And no, I’m not proposing that the FCC should force cellular networks to become more like Wi-Fi. There are good reasons why these two networks are engineered very differently.

What I am saying is this: imposing the “strongest possible regulations” on mobile networks is a risky bet the US doesn’t need to make, and the basis offered for it here is essentially a guess by a rather poorly informed consultant that additional baggage can easily be grafted onto mobile networks to give them something like neutrality without tossing aside their ability to support diverse applications in the future. Something like “Generic QoS Profiles” for mobile networks may prove workable someday, or it may not. We won’t know until it’s been tried and proved, and we certainly can’t take the opinion of the authors of this grotesquely over-simplified plan as gospel. This plan isn’t engineering, it’s wishful thinking, and toward what end?

Bear in mind that OTI’s previous work on wireless networks has consisted of nothing more than insisting on allocating more spectrum to unlicensed systems than to licensed ones, so they’ve helped create our current spectrum crunch. You’d think that such great fans of Wi-Fi would at least understand how Wi-Fi works, but here they’re offering a QoS system that lacks the sophistication of the standard QoS mechanism for Wi-Fi and fails utterly to comprehend the fact that the Policy and Charging Rules Function of LTE includes Charging. It’s right there in the name, folks.
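
For the record, here is roughly what that binding looks like. In LTE’s Policy and Charging Control model, the PCRF installs a single rule that carries both the QoS treatment and the billing instructions; the attribute names below are my simplification of the real Diameter AVPs:

```python
# Sketch of a PCC-style rule. In LTE, the PCRF hands the enforcement
# point one rule binding QoS *and* charging; names simplified here.
pcc_rule = {
    "rule_name": "toll-quality-voice",                # hypothetical
    "qos": {"qci": 1, "gbr_mbps": 0.080, "arp": 7},   # echoes CTC's example
    "charging": {
        "rating_group": 100,     # which tariff bucket this traffic bills to
        "metering": "volume",    # bill by bytes, time, or event
        "online": True,          # check quota in real time before serving
    },
}
```

Any “Generic QoS Profile” scheme that standardizes the qos half of that rule while ignoring the charging half has specified half a system.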

Here’s an alternate approach: leave QoS to the carriers, but have them listen to requests from application developers for the network features and functions they want. Allow carriers to compete with each other on the basis of their support for third-party apps, leveraging what they’re learning from their app incubators. Allow competition to govern third-party app support and see what happens. The mobile market is competitive, after all.

The results just might be surprising, and there are eons of future time in which to apply new rules for mobile if they’re really needed. But let’s make sure any new rules conform to the realities of LTE and are not simply aspirations and fantasies.