Atkinson & Weiser “Third Way” Paper, Isenberg Responds, and My Own Response

A few months back, Robert Atkinson and Phil Weiser wrote a paper called A “Third Way” on Network Neutrality. I recommend reading the paper, but to summarize: the paper asserts that the NN debate has polarized between the telcos & cable cos, who want an unlimited right to control traffic, and the pro-net neutrality advocates, who want all packets treated equally by the network operator. Atkinson & Weiser see this polarization as obscuring the fact that both sides of the debate raise legitimate concerns: about market abuse and investment in networks on the one hand, and about government intrusiveness into network management on the other.

Atkinson & Weiser therefore recommend an approach they believe addresses both sets of legitimate concerns. Congress should permit network operators considerable discretion in tiering, including favoring content by origin as well as by nature of service. To protect consumers from abuses of market power, however: (A) network operators must fully disclose which packets are favored and why, so that consumers can readily ascertain whether a lousy connection to mediastreamerA alongside a great connection to mediastreamerB is a consequence of mediastreamerA running a bad service or of their ISP cutting a deal with mediastreamerB; (B) Congress should affirm the FCC’s responsibility to monitor the broadband ISP market for anticompetitive abuses and permit the FCC to resolve any abusive practice that may emerge, whether by adjudication or by rule; and (C) the government should provide other incentives, such as tax credits or subsidies, to facilitate broadband deployment.

Recently, Dave Isenberg wrote a strong critique of the paper. Isenberg chastises Atkinson and Weiser for falling into what I shall characterize as the attractive trap of the apparently “reasonable compromise.” Isenberg argues that, on the one hand, Atkinson and Weiser lack vision. They fail to appreciate the revolutionary aspects of the internet, and the damage to the internet’s power as a disruptive technology if broadband network providers can exercise the kind of control over content and services that Atkinson & Weiser would permit under traditional antitrust analysis. On the other hand, Isenberg maintains that Atkinson & Weiser fail to appreciate the “Realpolitik” problems of relying on the FCC for enforcement instead of enacting a prophylactic, self-executing rule. Given the potential for agency capture and the length of time it takes the agency to act, a rule which does nothing but set up the FCC as a watchdog with discretion is worse than useless. Only prohibiting tiering and requiring network neutrality can preserve the power of the internet as a disruptive technology capable of challenging the network providers’ own core businesses (such as video and voice).

About a month ago, Phil Weiser and I debated this point over on the Public Knowledge policy blog. You can see our back and forth here: Phil’s first post, my response, Phil’s reply to me (with my reply in the comments), and Phil’s final summation.

As folks might imagine, I tend to side with Dave Isenberg on this one, although I recommend the Atkinson & Weiser paper to folks interested in alternative views. Atkinson and Weiser are no industry shills or ideological Neocons refusing to recognize the potential dangers. And, as I have always said, anyone who wants to formulate real policy rather than foster religious ideology needs to consider other views and recognize where someone else has a valid point. I don’t agree with Atkinson & Weiser (for reasons I’ve covered at length in the links and elsewhere), but I’m glad to have considered what they had to say.

Stay tuned . . . .

4 Comments

  1. Not having read the paper, one thought occurs to me at once from your summary: what about the consumer? He got lost.

    One need look no further than Medicare to see that too many confusing choices are as bad or worse for the consumer than no choices at all.

    More entertainingly – the consumer may well wish to make dynamic choices in content access – which right now is (or should be) relatively frictionless. If I decide to stop reading The New York Times online, and read The Washington Post instead – it’s almost effortless.

    Under a tiered structure, it may turn out to be a terribly difficult change to make – if WaPo loads slowly over my last-mile access, and NYT does not.

    And anyone who has waited for connection services at home (yes, Harold, I do read WetMachine…) knows that the friction and cost of switching providers for the last mile can be huge.

    As summarized, the paper has all the real world utility of a fish out of water.

  2. Well, yeah, but that’s why I included a link to the paper. It is quite readable, as these things go.

  3. Harold,

    Again, I haven’t had time to get to the paper(s) yet, but the idea of a “third way” prompts the following in my head:

    How would you feel about allowing some sorts of tiering in order to ameliorate congestion (by giving priority to real-time data *types*, but not differentiating data *sources*), but *disallowing premium charges* for such steps?

    You’ve said in the past that Whitacre Tiering would create the incentive to maintain bandwidth scarcity in order to maintain the value of premium pricing, and this still seems right to me. So the idea is to detach the tiering from the pricing incentive (and to implement tiering on a data type basis rather than a data source basis).

    Thus, the incentive to relieve congestion is still maintained, but packet differentiation is retained as one of a constellation of tools available to address it. (And of course, some people will make the argument that raw infrastructure build-out is still the most efficient way to relieve congestion, but specific circumstances may warrant individual evaluations of this in the short-to-mid term).

    That said, I’ve heard arguments against tiering to relieve congestion because new real-time data protocols might be systematically disadvantaged in this environment, leading to systematic stifling of innovation that involves new data protocols. But perhaps there might be a way around this (if tiering were based on some identifier of real-time data that could be self-selected by a technology innovator).

    I’m not completely certain about this stuff, but it occurred to me it might be food for thought. Would be interested to hear your reaction.

  4. “How would you feel about allowing some sorts of tiering in order to ameliorate congestion (by giving priority to real-time data *types*, but not differentiating data *sources*), but *disallowing premium charges* for such steps?”

    I think such steps are perfectly reasonable, for the reason you suggest. I agree that the matter needs some further, deeper investigation. But I have said on numerous occasions that a system which allowed subscribers to manage their own traffic preferences, or one which favored applications that are less tolerant of lag than others, or other schemes that actually linked the pricing mechanism to the behavior that imposes the additional expense, is reasonable.
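
    To make that concrete, here is a toy sketch of what a type-based (rather than source-based) scheduler might look like. It is purely illustrative: the class names, fields, and traffic-class labels are my own invention, not any real ISP’s implementation, and the traffic_class field stands in for something like the DSCP marking in an IP header. The point is simply that the classifier consults a self-declared traffic *type* and never looks at who sent the packet.

        from collections import deque
        from dataclasses import dataclass

        # Self-selected traffic classes, analogous to DSCP markings in the
        # IP header. Any innovator can mark a new protocol REALTIME; no
        # gatekeeper (and no paid deal) decides who qualifies.
        REALTIME = "realtime"  # lag-intolerant: voice, videoconferencing, games
        BULK = "bulk"          # lag-tolerant: web pages, file transfer, email

        @dataclass
        class Packet:
            source: str         # who sent it -- deliberately ignored below
            traffic_class: str  # self-declared type marking
            payload: bytes

        class TypeBasedScheduler:
            """Strict-priority scheduler keyed only on traffic_class."""

            def __init__(self) -> None:
                self.queues = {REALTIME: deque(), BULK: deque()}

            def enqueue(self, pkt: Packet) -> None:
                # Classification consults the type marking only. There is
                # no table of favored sources, so no deal with the ISP can
                # change where a packet lands.
                cls = pkt.traffic_class if pkt.traffic_class in self.queues else BULK
                self.queues[cls].append(pkt)

            def dequeue(self) -> Packet | None:
                # Real-time packets always leave first; bulk traffic fills
                # whatever capacity remains.
                for cls in (REALTIME, BULK):
                    if self.queues[cls]:
                        return self.queues[cls].popleft()
                return None

        # A big media company's bulk packet and a tiny startup's real-time
        # packet: the startup's packet dequeues first, despite its source.
        sched = TypeBasedScheduler()
        sched.enqueue(Packet("big-media-co.example", BULK, b"page"))
        sched.enqueue(Packet("tiny-startup.example", REALTIME, b"voice"))
        assert sched.dequeue().source == "tiny-startup.example"

    Note that strict priority like this invites everyone to mark everything “realtime,” which is why a real design would likely rate-limit or weight the priority class rather than give it absolute precedence. But because the marking is self-selected, a new protocol from an unknown innovator gets the same treatment as anyone else’s, which speaks to the concern above about stifling new real-time protocols.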
