Internet Congestion 101

One of the questions that comes up most persistently in discussions of the Internet is congestion. Traffic management by ISPs is a response to congestion, and naive critics of the practice often complain that the correct response to congestion isn’t management but simply adding capacity. Chris Marsden, an English network policy thinker, expresses this view on his Net neutrality in Europe blog today in connection with the pending inquiry by Ofcom, the UK’s version of the FCC, into traffic management. Marsden begins with one of Ofcom’s questions:

i) How enduring do you think congestion problems are likely to be on different networks and for different players?

And provides the following answer:

On mobile, there is likely to be a more or less permanent attempt by the oligopolistic networks to block rival applications, especially streamed video, voice and peer-to-peer file-sharing. On fixed networks, the cable network should be able to provide backhaul at reasonable levels with adequate backhaul investments. Its current policies intend to throttle for 10 hours in the 10am-9pm period which is not promising.

While this response is pretty much an exercise in conspiracy theory, there’s an interesting – and false – assumption buried in it about how the Internet works that’s more instructive than the stylistic flourishes. Marsden assumes that there is some level of backhaul bandwidth that’s “reasonable”, presumably measurable by consumers and regulators, such that any ISP that simply pays for this “reasonable” quantity of bandwidth can forswear managing packets, flows, sessions, and applications. This kind of thinking would appear sensible to anyone grounded in the culture of narrowband telephony: a telephone circuit consumes a fixed quantity of bandwidth (4 kHz) while it’s in use, so a competently designed telephone network simply needs enough internal capacity to allocate 4 kHz to each of the calls likely to be active at any given time. Telephone network engineers have devised a unit of traffic load called the “erlang” and formulas that predict the peak number of simultaneous calls a given population will make. They provision that capacity and declare victory: no active management needed, problem solved.

Marsden assumes that the broadband networks at the edge of the Internet simply need to be provisioned with the right number of “Internet Erlangs” and there will be no need to manage them either. It’s important to understand why this is not the case.

An Erlang formula (there are several, actually) looks something like this:

Offered traffic (in erlangs) is related to the call arrival rate, λ, and the average call-holding time, h, by:

E = λh

provided that h and λ are expressed using the same units of time (seconds and calls per second, or minutes and calls per minute).
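
To make the arithmetic concrete, here’s a minimal sketch in Python, with invented traffic numbers, that computes offered load from the formula above and then sizes a trunk group using the standard Erlang B blocking recurrence:

```python
def erlang_b(offered_load: float, circuits: int) -> float:
    """Blocking probability for a trunk group, via the standard
    Erlang B recurrence: B(0) = 1, B(m) = E*B(m-1) / (m + E*B(m-1))."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

# E = lambda * h, with lambda and h in matching units:
arrival_rate = 100.0   # calls per minute (illustrative)
holding_time = 3.0     # minutes per call (illustrative)
offered = arrival_rate * holding_time   # 300 erlangs

# Smallest trunk group that keeps blocking under 1%
circuits = 1
while erlang_b(offered, circuits) > 0.01:
    circuits += 1
print(f"{offered:.0f} erlangs offered; {circuits} circuits for <1% blocking")
```

Once the offered load is known, provisioning really is a closed-form exercise: buy that many circuits and the job is done. That’s the mindset Marsden carries over to broadband.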

The fundamental units are time and calls, where each call consumes the same bandwidth, 4 kHz. So bandwidth – the real resource we’re trying to manage – is managed indirectly, by the way it’s assigned to calls. If one caller is silent and another is trying to send a fax, the network assigns the same bandwidth to each, even though their demands are radically different moment by moment. That’s the nature of the technology the telephone network is built on: circuit switching.

The Internet is built on a different technology, packet switching, which enables the network to manage bandwidth dynamically. If the telephone network were packet switched, the fax caller in our example would borrow bandwidth from the silent caller and complete his transfer faster. As long as the silent caller isn’t perturbed, everyone’s a winner: the job that’s speed-sensitive is faster, the silent caller is happy, and the network operator isn’t spending money for capacity that can’t be used.
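
A toy calculation shows the difference. This is purely illustrative Python, not a model of any real network: two callers share an 8 kHz link, one silent and one sending a fax that will take whatever bandwidth it can get.

```python
LINK_CAPACITY = 8_000   # two voice circuits' worth, in Hz (toy numbers)
FAX_DEMAND = 8_000      # the fax wants everything available
SILENT_DEMAND = 0       # the silent caller sends nothing

# Circuit switching: each call owns a fixed 4 kHz slice, used or not
circuit_fax_rate = min(FAX_DEMAND, LINK_CAPACITY // 2)

# Packet switching is work-conserving: unused capacity goes to whoever
# has data queued, so the fax borrows the silent caller's share
packet_fax_rate = min(FAX_DEMAND, LINK_CAPACITY - SILENT_DEMAND)

print(f"circuit-switched fax rate: {circuit_fax_rate}")  # 4000
print(f"packet-switched fax rate:  {packet_fax_rate}")   # 8000, so the transfer finishes in half the time
```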

The goal of packet switching is actually to expose congestion to the user so that the operator can use all the capacity that’s built into the network. We want packet-switched networks to run at 100% utilization, in other words, not at the 20% level that’s typical of telephone networks at off-peak times.
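
A quick Monte Carlo sketch, again with made-up numbers, shows where that 20% figure comes from: when every call reserves its peak bandwidth, the reservation is sized for the worst case while actual demand sits far below it.

```python
import random

random.seed(1)

FLOWS = 10    # callers sharing the network (illustrative)
PEAK = 1.0    # units of bandwidth per flow when active
DUTY = 0.2    # each flow is busy 20% of the time

# Circuit switching reserves peak capacity for every flow, always
reserved = FLOWS * PEAK

# Sample instantaneous aggregate demand over many moments
samples = 10_000
total = sum(
    sum(PEAK for _ in range(FLOWS) if random.random() < DUTY)
    for _ in range(samples)
)
avg_demand = total / samples

print(f"reserved: {reserved}, average demand: {avg_demand:.2f}, "
      f"utilization: {avg_demand / reserved:.0%}")   # ~20%
# A packet link can be sized near average demand instead; queues and
# congestion signals absorb the moments when demand exceeds capacity.
```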

In my next post, I’ll explain how congestion interacts with applications on the Internet.

Comments
  • Chris Marsden

    Hi Richard – glad to see that my blog is still being read – and you’re right to focus on the word ‘reasonable’ but the clue is in the references to throttling. In consumer transparency terms, ‘reasonable’ traffic management may be defined by the national regulator e.g. Ofcom. What is possibly going to emerge from the ongoing EC and BEREC (pan-European) debate is that fixed networks may throttle short-term and for limited hours but not into the indefinite future for time periods such as those I indicate. The technical details should of course be left to the technical experts, but regulators will be required to ensure that reasonable consumer expectations are met.
    Re. conspiracy theories and European mobile cartels, yes you’re right that I and the European Commission and many national regulators have become rather jaundiced by their tactics over pricing and latterly throttling. That’s more than theoretical!
