FTC Gets a Chief Technologist

The Federal Trade Commission now has its first Chief Technologist:

Federal Trade Commission Chairman Jon Leibowitz today announced the appointment of Edward W. Felten as the agency’s first Chief Technologist. In his new position, Dr. Felten will advise the agency on evolving technology and policy issues.

Dr. Felten is a professor of computer science and public affairs and founding director of the Center for Information Technology Policy at Princeton University. He has served as a consultant to federal agencies, including the FTC and the Departments of Justice and Defense, and has testified before Congress on a range of technology, computer security, and privacy issues. He is a fellow of the Association for Computing Machinery and a recipient of the Scientific American 50 Award. Felten holds a Ph.D. in computer science and engineering from the University of Washington.

Dr. Felten’s research has focused on areas including computer security and privacy, especially relating to consumer products; technology law and policy; Internet software; intellectual property policy; and using technology to improve government.

Ed is perhaps best known on the Internet for his Freedom to Tinker blog, which started as a personal site and morphed into a group blog for the Princeton-based Center for Information Technology Policy, which he runs. Freedom to Tinker has tended to emphasize electronic voting systems, privacy, and copyright, with an occasional plunge into net neutrality. The addition of the CITP folks has given it a Berkman Center feel, emphasizing a rights-based approach to networks and technology policy over a technology-centered one.

I had the privilege of discussing the Comcast network management case with Ed and some of the Technology Liberation Front bloggers a couple of years ago; the podcast is still up at TLF, along with a transcript. The facts of the case weren’t well known when we had this discussion, so some of the conjecture – mine included – was wrong, but the podcast is a fair example of Ed’s approach to network policy. He’s very deliberate and strives for a moderate position, but I think it’s fair to say that his approach is more policy-based than purely technical. For example, Ed’s description of Jacobson’s Algorithm is a bit effusive:

Well, the usual way of dealing with a congested network is to have the network drop individual packets of data. When you communicate across the Internet, the data that you sent is divided up into packets. The usual thing that happens in congestion is that individual packets get lost or dropped when the network in the middle is overwhelmed, and the hosts at the endpoints that are communicating recognize the dropped packets and say, okay, the network must be congested, so let’s slow down. And there’s a really exquisitely engineered mechanism by which the end hosts can react to congestion on the Internet. But all that relies on the network in the middle responding to congestion in a certain way–by dropping packets–whereas Comcast did something else.

In practice, the “exquisitely engineered” Jacobson’s Algorithm (AKA “the Jacobson patch”) is three lines of code: it halves the number of unacknowledged packets a TCP sender can have outstanding whenever a packet is lost, and increases that number by roughly one packet per round trip as acknowledgments come back. It’s one of the main reasons we have Content Delivery Networks such as Akamai these days, because senders on shorter paths can increase their transmission rate faster than senders on longer paths. Jacobson doesn’t provide much help in managing congestion caused by excessive numbers of TCP connections, as I explain in the podcast. Former FCC Chief Technologist Jon Peha expressed a similar faith in the way that Jacobson interacts with large numbers of TCP sessions at the FCC’s second Comcast hearing; he wasn’t the Chief Technologist yet, incidentally, but he was appointed soon after. The Comcast system didn’t prevent Jacobson from working; it supplemented it with a system that operated across multiple TCP connections, which Jacobson can’t do. This is an important point because it speaks to the question of whether the Internet is so well engineered that it manages itself; the answer is no, in part because so many applications circumvent what Chuck Jackson calls the “gentleman’s agreement” implicit in Jacobson, but for other reasons as well.
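For readers who want to see the mechanism rather than take my word for it, here is a minimal sketch of the additive-increase/multiplicative-decrease rule the Jacobson patch introduced. It is not the original BSD kernel code; the function and variable names are mine and the numbers are purely illustrative.

```python
# A minimal sketch of the AIMD rule behind the Jacobson patch: halve the
# congestion window when a loss signals congestion, otherwise grow it by
# roughly one segment per round trip. Not the original kernel code; the
# names and numbers here are illustrative only.

def update_cwnd(cwnd: float, loss_detected: bool) -> float:
    """Return the new congestion window (in segments) after one round trip."""
    if loss_detected:
        # Multiplicative decrease: cut the window in half, never below one segment.
        return max(1.0, cwnd / 2.0)
    # Additive increase: roughly one extra segment per round-trip time.
    return cwnd + 1.0

# A sender ramps up, hits a loss on the third round trip, and backs off.
cwnd = 10.0
for rtt, lost in enumerate([False, False, True, False, False], start=1):
    cwnd = update_cwnd(cwnd, lost)
    print(f"RTT {rtt}: cwnd = {cwnd:.1f} segments")
```

Because the increase happens once per round trip, a sender on a 10 ms path recovers from a loss roughly ten times faster than one on a 100 ms path, which is the CDN advantage mentioned above.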

The intent of Jacobson’s Algorithm is to slow down the rate at which one program offers traffic to the network; when the application is using only one TCP connection, it works well. But when an application is using multiple connections, slowing one connection has the effect of speeding up the application’s other connections, so the application isn’t effectively throttled. That’s the problem with P2P in a nutshell.
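A back-of-the-envelope calculation shows why. If each connection hypothetically carries a window of ten packets, halving one of an application’s N flows shrinks its aggregate window by a factor of (2N - 1)/2N, so the more connections it opens, the less a single loss costs it. The per-connection window below is an assumption for the example, not a measurement.

```python
# Illustrative arithmetic only: how much of an application's aggregate
# congestion window survives a single loss, as a function of how many TCP
# connections it has open. The per-connection window of 10 segments is an
# assumed value for the example.

def share_left_after_one_loss(num_connections: int, cwnd_per_conn: float = 10.0) -> float:
    """Fraction of the aggregate window remaining after one flow is halved."""
    total_before = num_connections * cwnd_per_conn
    total_after = (num_connections - 1) * cwnd_per_conn + cwnd_per_conn / 2.0
    return total_after / total_before

for n in (1, 4, 20, 100):
    print(f"{n:3d} connections: one loss leaves {share_left_after_one_loss(n):.1%} of the window")
# 1 connection:   50.0%  -- the throttle bites
# 20 connections: 97.5%  -- barely a dent, and the surviving flows soon grab the slack
```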

The reason that BitTorrent caused so much trouble for the ISPs is that their provisioning assumed small numbers of sessions effectively managed by Jacobson, and BitTorrent’s upstream behavior was something they’d never seen before. As they dropped packets, the freed-up bandwidth was consumed by P2P instead of by Web surfing; oops. It turned out that the Comcast system was an attempt to ensure upstream bandwidth would always be available for non-BitTorrent applications, but they applied it at all hours of the day and night, which was overkill. They dropped that system in favor of a more “application neutral” one as soon as the ordered equipment arrived and the DOCSIS 3.0 transition was under way.

It’s interesting that the FTC has decided it needs the services of a Chief Technologist after doing so well without one for such a long time. I suspect the move is motivated by the agency’s desire to assert itself as a candidate to take over some parts of Internet regulation. Unlike the FCC, which needs authorization from Congress to address Internet Service Providers since the DC Circuit struck down its Comcast order, the FTC has a fairly open-ended mandate to police monopolistic and deceptive trade practices independent of technologies. To regulate ISPs, the FTC needs to understand what those ISPs are doing, and Ed can most likely build a group of experts capable of examining technical practices for transparency. Assigning more Internet oversight to the FTC puts a lot of the net neutrality authority questions to bed, in other words. But his mandate isn’t nearly as much about net neutrality as about privacy, copyright, and advertising. I believe he’s going to be very busy in these areas, and very good.

You can never have too many technologists involved in network policy, so I’m happy to see Ed Felten go to the FTC. Let’s hope he’s not corrupted by the policy wonks [Editor’s comment: relax, that’s a joke, some of my best friends are policy wonks] and is able to spread truth and light throughout the agency.