Comcast /BitTorrent Update: Important Filings by Topolski, Peha, and Ou (and some analysis by yr hmbl obdnt).

Most folks do not monitor the day-to-day filings in the Broadband Practices Notice of Inquiry, Docket No. 07-52, the proceeding that has become the open-docket part of the regulatory discussion on Comcast's treatment of p2p uploads. Lucky them. But sifting this endless stream of regulatory filings has yielded some rather important nuggets of gold in the last few weeks that deserve much greater attention from anyone who cares about the substance of the debate. As I discuss below, three recent filings deserve particular attention:

a) Robert Topolski demonstrates that Comcast blocks p2p uploads at a remarkably consistent rate, regardless of the time of day or night the test takes place and regardless of the nature of the content uploaded. This is utterly inconsistent with Comcast's stated position that it "delays" p2p traffic only during times of peak network congestion. Topolski adds some other interesting details as well.

b) Jon Peha, a professor of electrical engineering and public policy at Carnegie Mellon, provides his own explanation of why Comcast's characterization of its "network management practice" as merely "delaying" p2p uploads, and its claim that this practice is in accord with general industry practice, are nonsense.

c) In defense of Comcast (or at least, in opposition to any government action to restrict the ability of ISPs to target p2p traffic specifically), George Ou filed this piece on how bittorrent and other p2p applications exploit certain features of TCP, a critical part of the protocol suite that makes the internet possible. Ou argues that, as a result of this feature of p2p, heavy users of these applications will always be able to seize the vast majority of available bandwidth on the system to the disadvantage of all other users. Accordingly, he argues, the FCC should acknowledge that it is a "reasonable network management" practice to target p2p applications specifically, as opposed to heavy users or all applications generally.

My analysis of each filing, and something of a response to Ou, below . . . .

Last Time on “Comcast: The Misunderstood Consumer-Friendly Network Manager”

About two weeks ago, Comcast and BitTorrent, Inc., announced they had reached an agreement to discuss how to handle p2p traffic to their (and therefore, of course, to everyone else's) mutual satisfaction. Unsurprisingly, those not happy with big bad government coming in and telling "industry" what to do rejoiced at this "private sector" solution that was happening all on its own without any government interference whatsoever. Even those skeptical of the claim that this had nothing to do with the pending FCC complaint, and skeptical of Comcast generally, opted to reserve judgment on whether the announced agreement addressed anything relevant to the pending complaint against Comcast for its practice of targeting bittorrent (the application, not the company BitTorrent, Inc.) and other p2p applications.

FCC Chairman Martin, however, remained unimpressed. In particular, he drew attention to the fact that Comcast appeared now to admit to practices it had previously denied, and that Comcast could give no clear date on when it would stop blocking and/or degrading p2p applications. Two days later, Comcast Executive VP David Cohen sent this letter, in which he accused Kevin Martin of being a Comcast playa-hater who always got to be bringin' Comcast down. Cohen went on to say that (a) Comcast repeats that it only occasionally, only during periods of peak congestion, and only in the most non-intrusive way possible, "delays" uploads; (b) Comcast is not "arbitrary" or "discriminatory" in these practices; and (c) Comcast needs to wait until the end of the year, and provide customers with plenty of notice and preparation rather than "put our customers at risk of network congestion." Cohen concluded by telling Martin that everyone else looooves the Comcast/BitTorrent deal (well, anyone who matters anyway), Martin is just hatin' on cable, and they will no doubt continue to whine about how mean Martin is to their wholly owned subsidiaries in Congress.

The Topolski Filing

This proved too much for Robert Topolski, whose initial investigation on Comcast's "network management" practices vis-a-vis bittorrent and other p2p applications got the ball rolling in the first place. Topolski sent his own letter to the FCC, in which he offers a point-by-point rebuttal of Cohen. I highly recommend reading the thing for yourself, but I will hit a few critical headlines here:

a) In all tests conducted by Topolski from the time he began testing until February 20, 2008, Comcast blocked p2p uploads using Gnutella 100% of the time, blocked ED2K uploads approximately 75% of the time, and blocked bittorrent uploads approximately 40% of the time. These results remained consistent no matter what time of day or night Topolski conducted the test and without regard to the size of the file or the legality of the content. This result is inconsistent with Comcast's claim that it only targets p2p uploads during periods of peak congestion, or that the "delay" of uploads is transient.

b) On February 20, 2008, Comcast apparently changed its network management practice. It ceased all interference with Gnutella or ED2K, but interference with bittorrent uploads increased to 75% of all attempted uploads. Again, this new result remains remarkably consistent. The rate of degradation for bittorrent uploads remains approximately 75% no matter the content or time of day of the attempted uploads. This demonstrates an ongoing monitoring by Comcast of its network management decisions, and an ability to alter those practices when it so desires. It also raises questions about why Comcast would need to wait until the end of the year to implement a new network management system that did not "delay" p2p uploads, since Comcast apparently had no trouble implementing a radical change on February 20.
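Per-protocol interference rates like Topolski's are straightforward to compute from a log of upload attempts. A minimal sketch (the log format and field names here are hypothetical illustrations, not Topolski's actual methodology):

```python
# Sketch: tallying the interference rate per protocol from a hypothetical
# log of upload attempts. The tuple format is an illustrative assumption.
from collections import defaultdict

def interference_rates(attempts):
    """attempts: iterable of (protocol, was_interrupted) tuples."""
    totals = defaultdict(int)
    blocked = defaultdict(int)
    for protocol, was_interrupted in attempts:
        totals[protocol] += 1
        if was_interrupted:
            blocked[protocol] += 1
    return {p: blocked[p] / totals[p] for p in totals}

# Toy log mimicking the post-February-20 pattern described above:
log = [("bittorrent", True), ("bittorrent", True), ("bittorrent", True),
       ("bittorrent", False), ("gnutella", False), ("ed2k", False)]
print(interference_rates(log))
# {'bittorrent': 0.75, 'gnutella': 0.0, 'ed2k': 0.0}
```

The point of running such a tally at different hours and with different content, as Topolski did, is that a rate which never moves is hard to square with congestion-triggered management.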

Perhaps more importantly, Topolski's test results (which Topolski states were independently confirmed by EFF's Peter Eckersley on the monitoring website NNSQUAD.ORG) demonstrate that Comcast is both capable of targeting specific p2p applications while ignoring others, and that it is doing so. In the absence of any other information, Topolski argues that this blocking of bittorrent is "arbitrary." My personal fear is that it is not "arbitrary," but that it is motivated by reasons related to Comcast's business plans rather than by engineering reasons. But in the absence of evidence, I can't prove anything. Hopefully, the FCC will look into this and ask (a) whether Comcast disputes Topolski's allegations, and (b) if Topolski is correct, why Comcast chooses the specific applications that it "delays."

c) Topolski includes statements from Comcast CTO Tony Werner that indicate that Comcast can manage its capacity on a dynamic basis by reclaiming channels from video delivery or by “virtual node splitting,” and that Comcast both anticipates network use surges and plans accordingly. If true, rather than merely being the sort of puffery common in the industry, it flatly contradicts Comcast’s repeated assertions that it suffers capacity constraints and that the only way Comcast can ensure that a handful of “bandwidth hogs” don’t destroy the network is by “delaying” p2p uploads.

The Peha Filing

Cohen’s scolding of Kevin Martin also prompted Carnegie Mellon Professor Jon Peha to explain why Comcast’s characterization of its practices does not wash. For those who love to play the “this is all heavy engineering and none of you policy wonks can ever really understand this stuff” card, I direct your attention to Mr. Peha’s engineering credentials: Ph.D. in EE from Stanford University, IEEE Fellow, etc. For those who like to play the “academics don’t understand how this stuff works in the real world” card, I note that Peha has been CTO of three tech start-ups and worked for SRI International, Bell Labs, MS, and the U.S. government.

Peha’s letter also provides a point-by-point rebuttal of the Comcast claims. Again, I will commend the entire letter to folks rather than attempt to summarize it all here. But his challenge to Comcast’s claim that its practice of targeting p2p applications (and, if Topolski is correct, very specific applications for very specific types of blocking and degradation) is standard industry practice deserves reproduction here:

Comcast has made this assertion without evidence. I would never claim to know everything that happens in engineering circles, but I am unaware of any technical literature that has proposed that ISPs adopt this particular practice as a way of dealing with congestion, or to use this practice to address any other issues that might be important in the context of “network management.” The practice is known, but it is known in the security literature rather than the network management literature. The textbook name for this is a “man in the middle attack,” or MITM attack. It is therefore reasonable to ask whether the FCC should consider this approach as falling within the realm of network management at all, much less reasonable network management.
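The technique Peha is describing involves injecting forged TCP reset (RST) segments so each endpoint believes the other hung up. As a rough illustration of how little a forged reset requires, here is a minimal sketch of a bare TCP header with the RST bit set (port and sequence values are placeholders; nothing is actually sent, and a real forgery would also need a sequence number inside the victim connection's window):

```python
# Sketch: a 20-byte TCP header with the RST flag set, per the standard
# header layout. Checksum is left at zero since nothing is transmitted.
import struct

def tcp_rst_header(src_port, dst_port, seq):
    data_offset = 5 << 4          # 5 32-bit words, no options
    flags = 0x04                  # the RST bit
    window = checksum = urgent = 0
    return struct.pack("!HHIIBBHHH", src_port, dst_port, seq,
                       0,          # ack number (unused on a bare RST)
                       data_offset, flags, window, checksum, urgent)

hdr = tcp_rst_header(6881, 51413, 123456789)
assert len(hdr) == 20 and hdr[13] == 0x04   # 20-byte header, RST flag set
```

The security literature treats unsolicited third-party RST injection as an attack precisely because neither endpoint asked for the teardown, which is the crux of Peha's point.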

The Ou Filing

Finally, we come to George Ou’s filing on how p2p applications exploit certain aspects of TCP (a key element of the internet’s architecture) to “unfairly” (as Ou characterizes it in this article) capture bandwidth at the expense of other applications. As David Clark touched on at the Boston FCC hearing, TCP has built-in mechanisms for addressing congestion. As I understand Ou’s argument (and I trust he will once again correct me if he thinks I misrepresent his position), and to grossly oversimplify the engineering, p2p applications such as bittorrent are designed to open multiple TCP streams and thus avoid the congestion regulation mechanisms in TCP. This has the effect of allowing p2p applications that “blatantly exploit” this feature of TCP to circumvent the existing traffic controls at the expense of other users, grab up all the available bandwidth, and force all other users to endure TCP congestion delays.
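Ou's multiple-streams point reduces to simple arithmetic: if a congested link is split roughly evenly per TCP flow, a user's share grows with the number of flows opened. A toy illustration (the numbers here are mine, not Ou's):

```python
# Sketch: per-flow fairness on a shared link. TCP's congestion control
# converges flows toward roughly equal shares, so share scales with
# flow count, not with users. Link size and flow counts are illustrative.
def user_share(my_flows, other_flows, link_mbps):
    total_flows = my_flows + other_flows
    return link_mbps * my_flows / total_flows

# A p2p user with 30 flows vs. a web user with 2 flows on a 16 Mbps link:
print(user_share(30, 2, 16))  # 15.0 Mbps for the p2p user
print(user_share(2, 30, 16))  # 1.0 Mbps left for the other user
```

This is the sense in which opening many streams "circumvents" TCP's controls: each individual flow still backs off politely under congestion, but the user running thirty of them keeps most of the link anyway.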

Assuming Ou is correct (and I have not heard any engineering discussion to the contrary), Ou’s submission into the record provides an answer to one of the questions raised by Peha: how is it consistent with reasonable, non-arbitrary network management to target a specific type of application rather than address congestion generally? Ou argues that this is justified because p2p applications are different in a highly relevant way, and that only by targeting p2p applications (and, presumably, other applications that exploit the same features of TCP for similar effect) can Comcast (or other ISPs) address the problem. If ISPs merely address congestion generally, a handful of p2p users will continue to absorb a disproportionate amount of the resources.

Ou’s argument does not, of course, address Topolski’s observation that Comcast is not merely targeting p2p applications that exploit the TCP protocol, but is targeting very specific p2p applications with precision. But that issue is irrelevant to Ou’s real argument. Ou is not — as I understand it — defending any specific Comcast practice. He is arguing that the FCC must not prohibit ISPs from targeting specific applications like p2p because very real differences in the nature of the applications justify this approach.

I am not an engineer, and will wait to see if anyone attempts to rebut Ou’s technical characterization. But assuming Ou is correct, I do not believe that his argument carries the day. The “arms race” between application providers and network providers is an old one, and makes itself felt in every level of network management. The question does not begin or end with engineering (a fact that causes much consternation to those who wish otherwise). As in any other policy debate — especially where critical infrastructure is involved — law and economics inform the technical decisions, the technology and the economics inform the choice of laws, and the technology and legal environment inform the business decisions.

For example, as I have observed in the past, one approach to the network congestion issue is to impose explicit limits on users while giving users tools to better manage their bandwidth. The combination of giving users incentives to be more efficient and giving them the tools to do so may alleviate congestion in a way that — to my mind at least — is a lot less dangerous than giving ISPs carte blanche to go after applications.
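As a sketch of what a user-side bandwidth-management tool might do, here is a classic token-bucket rate limiter; the rate and burst values are illustrative, and this is a generic technique rather than any particular product's implementation:

```python
# Sketch: a token-bucket limiter a user-side tool could apply to cap
# p2p upload rate. Tokens refill at a steady rate; a send is allowed
# only if enough tokens have accumulated.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes   # maximum tokens (allowed burst size)
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True    # send now
        return False       # hold until tokens refill

bucket = TokenBucket(rate_bytes_per_sec=50_000, burst_bytes=10_000)
print(bucket.allow(8_000))    # True: within the initial burst allowance
print(bucket.allow(100_000))  # False: larger than the bucket can ever hold
```

The appeal of this approach is that the limit applies to bytes, not to which application sent them.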

Ou hates the idea of metered pricing, and considers those of us on the net neutrality side who suggest it as an alternative way of handling congestion to be hypocrites, because (he argues) metered pricing will kill the internet by chilling innovation and deterring use.

As it happens, I agree with Ou about the dangers of metered pricing. But now we are no longer having a technical argument. We are debating among possible alternatives, both technically feasible, but with very different possible real world outcomes. Ou is quite willing to agree that metering internet usage is technically possible, and that it is in fact happening in the real world. He merely thinks it will be a disaster if we require ISPs to adopt this approach as the alternative to targeting p2p applications.

Maybe. Or maybe, faced with these choices, technical folks looking to make money will get clever. Heck, that’s how we got businesses like Akamai: folks wanted to move content faster and Akamai offered one possible solution. That happened because folks at the edge of the network (and there are an awful lot of them, many of them both very clever and very greedy in that good free market sense of the word) offered solutions to these problems and FCC rules prevented the network operators from interfering with those solutions. Now, broadband ISPs like Comcast would rather offer a different solution to the same problem of moving more stuff faster. Unsurprisingly, their solutions to the same problem are quite different, do not involve letting people at the edge decide without their consent, and produce very different worlds.

So no, I don’t ignore the engineering realities. That would be idiotic. But it is equally idiotic to pretend that one engineer’s perception of one technical aspect of a very complex problem is a showstopper to which all non-technical interests must yield. Just as there was never an internet in which all traffic moved at the same speed and money didn’t matter, there was never an internet where law and economics didn’t impact the technical solutions people chose to develop. Just ask anyone who used to waste “hundreds, if not thousands, of dollars” posting to Usenet back when no one was allowed to use “the backbone” for commerce. It would be equally nice if others could stop pretending this is just about engineering; it goes to the heart of what kind of world we want to live in and what kind of applications will be permitted — either by operation of public policy or by operation of corporate policy — to evolve.

Stay tuned . . . .


19 Comments

  1. barry payne_economist says:

    In the Ou filing in Docket 07-52 on the bottom of page 7, a key point is made about “weighted” bandwidth use.

    An example starts with bandwidth shared equally at a 1:1 ratio between Users A and B, after which increased P2P use by User A crowds out bandwidth to B under “normal TCP” by a ratio of 11:1, but then is restored to a ratio of 1:1 under “weighted TCP” by effectively compressing the bandwidth used by A.
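The arithmetic in Ou's example can be checked in a few lines: per-flow fairness gives User A's many flows a proportionally larger share, while weighting each user's flows by one over that user's flow count restores a 1:1 split. A sketch using the filing's numbers (the 11-flow count is my illustrative assumption for producing the 11:1 ratio):

```python
# Sketch: per-flow ("normal TCP") vs. per-user weighted shares.
# Each user's effective weight is the sum of their flow weights.
def shares(flows_a, flows_b, weighted):
    wa = flows_a * (1 / flows_a if weighted else 1)
    wb = flows_b * (1 / flows_b if weighted else 1)
    return wa / (wa + wb), wb / (wa + wb)

print(shares(11, 1, weighted=False))  # about (0.917, 0.083): the 11:1 case
print(shares(11, 1, weighted=True))   # (0.5, 0.5): restored to 1:1
```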

    The essential question is, What bandwidth is User A “overusing” through P2P in the first place – its own, that of User B, both A and B, or none?

If User A can technically bypass and exceed its administrative assignment of “up to” maximum bandwidth, including that of concurrent upstream and downstream use, then “weighting” A’s use to restrict it within that limit would be consistent with net neutral management.

    Otherwise, how much more A’s use exceeds B’s use is largely a matter of how much B uses. If B is an average Comcast customer, the use is small compared to the maximum bandwidth actually used by A, even though A and B are sold the same maximum bandwidth.

    So an unweighted TCP use ratio by User A to B of 11:1 would be “unusual” only because a high number compared to a low number yields a high ratio – A is using most or all of the existing bandwidth made available by Comcast and B is not, except of course for the bandwidth of A types that Comcast sabotages in secret.

    A third case of neutral management would force providers like Comcast to reveal uniform, explicit limits on all bandwidth availability in Mbs and total GB use, regardless of P2P or any other use.

    If weighted bandwidth use is necessary to restrict P2P or any other application that can technically bypass assignments of bandwidth, that would be a reason to enforce net neutral management – not a reason against net neutrality.

    Weighting bandwidth itself can be neutral or not, to manage the coincident and uniform impact of data packets which contribute to congestion regardless of content.

    Comcast avoids neutral management options in part because it would have to admit in their adoption that “A” users are, in effect, competing directly with it for the surplus of oversold bandwidth capacity to existing customers, which is currently disappearing through congestion.

    Comcast has used this policy to expand its customer base of “low users” through manipulative and deceptive marketing practices, while longer term objectives conveniently line up with overall content control.

By arbitrarily identifying “A” type users as “causing” congestion in some technically unique way, Comcast remains free *not* to apply neutral bandwidth weighting if and when appropriate, instead managing by content and application identity rather than neutral control of data packets.

    P2P itself and “seizure of the networks by a small number of users” is not the problem – arbitrarily declared overuse of unspecified bandwidth as the “cause” of congestion is the problem.

  2. Harold says:

    Barry:

    You are correct. It is all about the framing. I will refer back to a piece I wrote on this about 6 months ago called “Of Bandwidth Hogs, QoS, and Regulatory Chameleons.”
    http://www.wetmachine.com/t

  3. No, it’s not all about the framing, it’s about the terms under which residential broadband accounts are sold. Business broadband is typically sold on the basis of a “Committed Information Rate” or CIR, which is a minimum the user can expect to get at any time, with an option to go over the CIR if the bandwidth is available. Residential accounts are typically sold in a completely different way, with a cap that represents the peak rate, and with no guaranteed minimum. Hence, when the offered load of all users on any common facility exceeds the capacity of the circuit, the carrier has to manage the bandwidth in some fashion.

    It is fallacious to imagine that this management has ever been left to TCP, because TCP isn’t able to allocate among customers. It doesn’t know anything about customers, or about users, or about IP addresses, it only knows about TCP flows, and these may be unequally distributed across the user population on the shared circuit in question.

    Hence carriers apply per-user fairness systems that equalize bandwidth among users when the total bandwidth requested exceeds the capacity of the link.

    That’s what keeps the Internet from collapsing, which is George Ou’s point.
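The per-user fairness described above is commonly formalized as max-min fair allocation: when demand exceeds the shared link, every user gets an equal share, and whatever a light user doesn't need is redistributed to heavier users. A sketch (illustrative code, not any carrier's actual implementation):

```python
# Sketch: max-min fair allocation of a shared link across users.
def max_min_fair(demands, capacity):
    """demands: {user: requested_mbps}. Returns {user: allocated_mbps}."""
    alloc = {}
    pending = dict(demands)
    while pending:
        share = capacity / len(pending)
        # users whose demand fits within the equal share get it in full
        satisfied = {u: d for u, d in pending.items() if d <= share}
        if not satisfied:
            for u in pending:       # everyone left splits what remains
                alloc[u] = share
            return alloc
        for u, d in satisfied.items():
            alloc[u] = d
            capacity -= d
            del pending[u]
    return alloc

# A 9 Mbps shared link: the p2p user asks for 10, two light users for 1 each.
print(max_min_fair({"p2p": 10, "web": 1, "mail": 1}, 9))
# {'web': 1, 'mail': 1, 'p2p': 7.0}
```

Note the contrast with per-flow fairness: here the heavy user is capped at the leftover capacity no matter how many flows they open.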

    Topolski and Peha make false and misleading claims by focusing on particular flows instead of the overall fairness system on the shared circuit. Topolski lacks the expertise to examine the link, and does not make any effort to do so. If he had, he would have offered some data about the actual level of congestion on the network when he ran his tests. As he offers no meaningful data and simply asserts that he had problems at all hours of the day, you have to take or leave his claims on faith. I leave them because I have not been able to replicate them on my Comcast connection.

    Peha on the other hand has the expertise to know better, and simply chooses to mislead the FCC with his comments on Cohen’s follow-up. Peha misleads by focusing on the behavior of a single TCP stream terminated by the TCP RST command, and fails to mention that BitTorrent operates multiple streams concurrently and is constantly shifting its download target around a swarm of different uploaders. His remarks are purely political and lack any technical substance.

    Economic arguments are fine, but they serve mainly to illuminate technical findings. Topolski and Peha present distorted technical findings, which are not helpful in the overall quest for Truth, Justice, and the American Way.

    They’re no better than Chairman Martin’s vapid comments, frankly.

  4. Bill Clay says:

    I’ll concede from the outset that present company outranks me in technical, legal, and economic credentials. But as an IT practitioner since 1966, I cannot let Richard Bennett’s remarks go without comment — thanks in large part to Barry Payne’s insight (sorry if I didn’t pick up the same insight the first time in Harold’s earlier posts).

    No one disagrees that ISPs (even ones with better network architectures than Comcast) must limit bandwidth consumption under heavy load. This is a primary function of IP, a network-layer protocol. Any intermediate node (e.g., a router) may discard packets as necessary, preferably basing its choices upon information contained in the IP header; otherwise, at random.

    TCP is not a network-layer protocol; it’s a session-layer protocol (sort of; IP and TCP predate the ISO 7-layer model). If network service providers examine TCP-layer characteristics to determine which packets to discard, they are violating the bedrock design principle of all modern network protocols: layered separation of function. And they need not do so: they can do their job just fine without violating this most fundamental of communication protocol principles.

    Mr. Bennett’s assertion that the use of TCP RSTs by network service providers is legitimized by use of the same technique in security devices like intrusion prevention systems is a red herring. Such devices are generally operated by end-users on the network edge. End-users “own” their TCP sessions and have every right to interfere with them as they see fit.

    Even if this practice is carried out by network service providers at the behest of law-enforcement authorities (under court order, please), that is a legally sanctioned intrusion on citizens’ privacy and is consistent with centuries of practice in the non-digital world.

    “Network management” is not a legitimate justification for such intrusions nor for violation of the fundamental structure of TCP/IP. There is simply no technical requirement for Comcast’s behavior — especially not to enforce “per-user” bandwidth ceilings, the justification offered by Mr. Bennett. Faithful implementation of the IP protocol gives Comcast all the flexibility they need to limit subscribers to any per-user bandwidth ceiling they wish to enforce.

    There is only one “technical” justification for such behavior: to favor or hinder specific applications and end-user behaviors toward non-technical ends, such as marketing plans and strategic business alliances.

  5. Harold says:

    Richard:

    I believe you are unfair to Peha. He rather explicitly makes the point that this technique is usually a matter of network security rather than network management — which is precisely why he questions Comcast’s claims. You may disagree, but I think you have no basis to characterize Peha as disingenuous any more than you are disingenuous merely because you take a contrary position.

    OTOH, I have a question that has increasingly bothered me as the argument has shifted to the nature of TCP. If this IS a TCP issue, why isn’t the appropriate forum for resolving this the IETF? Would it not have been far more appropriate for whoever at Comcast proposed this as a solution to have used the IETF processes? While not mandatory, it is certainly the way in which TCP initially developed, starting (I think) with RFC 793.

  6. Bill, I disagree with everything you say.

    * TCP is a transport protocol, not a session protocol. The Internet’s lack of a universal session protocol is a big part of the problem, because session protocols operate per-user.

    * The examination of TCP packets has always been done by intermediate routers because discarding TCP ACKs is pathological for congestion relief. It’s not an “intrusion”, it’s standard practice.

    * The fairness problem arises in large part because some users have several times more active TCP streams going than others; RST reduces the number of streams and can be used either fairly or unfairly depending on the overall mix of traffic.

    Harold, the issue of TCP unfairness has been under discussion in the IETF for many years, and a number of proposals heard and adopted to deal with it. However, the end-to-end nature of TCP is a barrier to progress in this regard. In practice, we need to upgrade all 1.3 billion Internet-connected computers to reform TCP, and most are running Microsoft Windows. In practice, this means that Microsoft controls the Internet. They’re resistant to making changes that might affect compatibility, so we’re in a bit of a quandary.

    For some of the history behind TCP fairness, see my blog post: http://bennett.com/blog/ind

  7. Bill Clay says:

    Thanks for your detailed and informative response, Richard. Interesting reference by Nagle, but I don’t see that it supports the assertion that ISPs must or should abort TCP sessions with RSTs to implement per-user bandwidth ceilings. May I reply point-by-point?

    * Ah, the usual issue, “Which layer is TCP?” How about you spot me session and I spot you transport? Sure, TCP/IP is not as neatly layered as we’d like. Fact is, it does both.

    * You’re obviously familiar with deeper protocol research than I and you take this point … up to a point. However, I’d say there’s a big difference between noticing and trying not to discard TCP ACKs (a transport function) versus deeply inspecting TCP headers and payload and generating RSTs (a session function).

    Another analogy: the difference between the PO weighing letters vs. steaming them open. This may even be apropos considering the majority of TCP ACKs in most of the high-bandwidth transfers I’ve seen are in the lightly loaded direction and are usually accompanied by little or no payload (but I work in a business environment; no BitTorrent or the like).

    * Of course operating multiple TCP sessions increases bandwidth available in the TCP context. However, near as I can tell, neither what you’ve written here and in previous posts on this subject nor Nagle’s article demonstrate that aborting TCP sessions is the best or only way to deal with the resulting congestion.

    Can you explain why just tallying bandwidth utilization per origin and destination IP address pair (surely less expensive than deep packet inspection) and randomly discarding packets of high-utilization origin/destination pairs — politely avoiding low-payload TCP ACKs whenever possible — cannot provide a fair, minimally performance-impairing per-user bandwidth ceiling?
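For concreteness, the per-IP-pair scheme Bill describes can be sketched in a few lines. This is a toy model of his proposal, not a tested queue discipline; the threshold and drop-probability rule are placeholder assumptions:

```python
# Sketch: tally bytes per (src, dst) IP pair and, under congestion,
# drop a pair's packets with probability proportional to that pair's
# share of recent traffic. Threshold value is illustrative.
import random
from collections import defaultdict

class PairTally:
    def __init__(self, drop_threshold_bytes):
        self.bytes = defaultdict(int)
        self.threshold = drop_threshold_bytes

    def should_drop(self, src, dst, size, congested):
        pair = (src, dst)
        self.bytes[pair] += size
        if not congested:
            return False          # never drop on an uncongested link
        # heavier pairs are proportionally more likely to be dropped
        p = min(1.0, self.bytes[pair] / self.threshold)
        return random.random() < p

tally = PairTally(drop_threshold_bytes=1_000_000)
print(tally.should_drop("10.0.0.1", "10.0.0.2", 500, congested=False))  # False
```

A real implementation would also age out the tallies over time and skip low-payload ACKs, as Bill suggests, but the core bookkeeping is this simple.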

  8. Bill, I worked on ISO-OSI protocol design and IETF protocol design at the same time back in the mid-80s, and I feel pretty confident about the difference between transport and session protocols. In the IETF world, SIP is a session protocol, while TCP is simply a virtual circuit transport protocol. TCP is virtually identical to TP4, one of the OSI transport protocols.

    On your larger point, I agree Saddam Hussein wasn’t the only vile dictator in the world, and Bush could have invaded Zimbabwe just as easily. OK, scratch that.

    The task for the FCC is not to make rules requiring companies to do things the “best way”, where the government is judge of what’s best. It’s to assess whether a given approach is “reasonable” or not. The Comcast RST approach is reasonable, I would contend, because it achieves the goal of allocating active TCP sessions fairly. There are other ways to equalize bandwidth, some better and some worse, but RST is clearly in the ballpark as a technique to consider.

    It turns out that the company has now found a better way and will be transitioning to it. This better way, however, will reduce the amount of BW available for BitTorrent uploads in most cases. So I don’t know that it’s a win for the BT companies.

    When rhetoric collides with reality, weird things happen.

  9. Harold says:

    Richard wrote:
    “The task for the FCC is not to make rules requiring companies to do things the ”best way“, where the government is judge of what’s best. It’s to assess whether a given approach is ”reasonable“ or not. The Comcast RST approach is reasonable, I would contend, because it achieves the goal of allocating active TCP sessions fairly. There are other ways to equalize bandwidth, some better and some worse, but RST is clearly in the ballpark as a technique to consider.”

    And here we come to the substantive disagreement about the role of the FCC. Leaving aside whether it is the role of the FCC to make rules requiring companies to do things “the best way” (which I recognize you consider impossible for an agency to do), there is (IMO at least) far more to the question of “reasonableness” than whether a system allocates TCP sessions fairly. And again, I am not convinced that denying p2p users any opportunity to use the application is a “fair” allocation of TCP. Quite the opposite, in fact. That there are other ways that more fairly distribute access to capacity, so that all users have at least some opportunity to use the applications and services of their choice, is part of the reasonableness question.

  10. Well, the fact is that Comcast has never denied P2P users the opportunity to use the system. I’ve been a Comcast customer for years, and I’ve used BitTorrent quite successfully before, during, and after the most aggressive deployment of Sandvine on the network. I’ve seen the system slow down my BitTorrent seeds, but never to the point that seeding was impossible, and the whole time BitTorrent downloads have been faster on Comcast than on a DSL network.

    The tests done by Topolski and EFF were designed to fail because the swarms were too small and the duration was too short.

    So my point stands that the goal the Sandvine system aims to reach is a reasonable one, and Comcast’s only problems were failure to disclose and bad public relations.

  11. Bill Clay says:

    Harold’s response is surely more germane to the big picture than what follows, but I would still like to know Richard’s answer to my question.

    From Richard’s response, I guess I’ve tipped my hand. It’s true, I still believe that government can and should set uniform bounds on the behavior of oligopoly providers of critical public infrastructure (though I don’t remember veering off into foreign policy). However, I am a technician first and I salute good engineering supported by factual evidence.

    Looks like Richard trumps me in both protocol design props and blog zingers. But that’s all he gave me to salute, not technical rationale or fact. A couple observations:

    1. I stated that Richard has not demonstrated (here) that aborting TCP sessions is the best or only way to deal with IP network congestion. Richard responded, “The task for the FCC is not to make rules requiring companies to do things the ‘best way.’” Two quibbles: I raised a technical issue, not a policy one. And it’s bandwidth that must be fairly allocated to users, not TCP sessions. We could use the Internet well and intensively without recourse to TCP; the bandwidth must still be fairly allocated. If we can’t agree on the unit of allocation, it’s hardly surprising we disagree on how to allocate it. Frame of reference problem again, but at a lower level.

    2. Richard gave us a judgment of what’s “reasonable” behavior for Comcast, but the only support he offered for that judgment are his credentials. I may be just a garden-variety technical practitioner with little background in protocol theory and design, but I still need more than credentials to accept such a claim. I ask no more of Richard than I would of a vendor trying to sell me their newest network gear. How about some data from tests by disinterested specialists? An IETF RFC? A peer-reviewed journal article or two? Even just an informal but technically supported explanation?

    While we’ve all contributed our share of colliding rhetoric, I haven’t noticed it suffer even a glancing blow with quantitative reality.

  12. Bill, I don’t believe TCP RSTs are the “best or only way” to control traffic on a DOCSIS network, and neither does anybody else. There was a time when it appeared that the most economical way to handle the traffic, pending an upgrade to DOCSIS 3.0, was a system that used them, but Comcast has now stumbled upon something that’s more efficient and less application-centric.

    So don’t ask me to support an argument I’m not making.

    If you want an RFC on traffic management that addresses the design defects of TCP, see John Nagle’s RFC 896: http://www.faqs.org/rfcs/rf
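    A point worth noting for this exchange: Nagle’s fix lives at the endpoints. The operating system batches small writes by default, and an application can opt out per socket. A minimal standard-library Python sketch of that control knob:

```python
import socket

# Nagle's algorithm (RFC 896) holds back small TCP segments until
# outstanding data is acknowledged, trading a little latency for far
# fewer tiny packets on the wire. The opt-out is per-socket and under
# the application's control, not the network operator's:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
print(nodelay != 0)
```

    Latency-sensitive applications (ssh, games) typically disable Nagle this way; bulk-transfer applications leave it on.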

  13. Brett Glass says:

    [Chiming in a bit late in the thread] Bill, Barry, and Harold: As Richard and George have both tried to explain, a key problem with P2P is that it exploits weaknesses in the design of the TCP/IP protocol suite — weaknesses that the developers attempted to patch somewhat crudely (e.g. via Van Jacobson’s congestion-avoidance algorithm) but never could really fix. Because P2P seizes bandwidth and priority, it actually makes the network “non-neutral” — and throttling it is an attempt to restore neutrality, not to break it.

    In my opinion, RST packets are a particularly elegant way to handle the throttling of P2P (or, at least, P2P applications that use TCP; not all do) for several reasons. Firstly, it is extremely efficient. If a TCP “socket,” or connection, is simply blocked, the two sides retry their transmissions many times before they give up. This keeps the network congested and also wastes the two sides’ time. RST packets tell the two parties, “This connection is over,” and they can get on with things. In particular, a BitTorrent client can turn to another source for the material.
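    To make the “get on with things” behavior concrete, here is a small self-contained Python sketch. It runs entirely on loopback, and the zero-linger close is only a stand-in for a middlebox-injected RST — the point is what the receiving application sees:

```python
import socket
import struct
import threading

# A local stand-in for a reset-injecting middlebox: closing a socket
# with SO_LINGER set to (on, 0) makes the kernel send RST, not FIN.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def reset_first_client():
    conn, _ = srv.accept()
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()  # emits a TCP RST

t = threading.Thread(target=reset_first_client)
t.start()

c = socket.socket()
c.connect(("127.0.0.1", port))
t.join()
try:
    c.recv(1024)       # fails fast instead of retransmitting into silence
    outcome = "data"
except ConnectionResetError:
    outcome = "reset"  # a BitTorrent client would now try another peer
c.close()
srv.close()
print(outcome)
```

    The reset surfaces as an immediate error at the application, which is exactly the property Brett is describing; whether injecting that error from the middle of the network is acceptable is the question the thread is arguing about.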

    Secondly, the use of RST packets makes it possible for the network management system to be “passive” rather than creating a bottleneck. Instead of acting as a firewall (which delays traffic), it can be a passive “listener” on the network (it listens via a network hub to the traffic “going by”) until it decides to jump in and terminate a connection. This is EXTREMELY efficient and cost-effective, and guarantees that no other traffic is affected.

    Finally, the use of RSTs makes routers work more efficiently. Routers with stateful firewalls (the most effective at deflecting attacks from the Internet) maintain tables of connections which are passing through the router. A RST packet tells the router that it can free up the table entry for that connection, preventing overflows which can cause the router to malfunction.
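    The table-reclamation point can be illustrated with a toy model (this is a sketch of the general idea, not any vendor’s implementation):

```python
# Toy stateful connection table: an entry is added when a SYN is seen
# and freed when a RST is seen, keeping the table bounded. Real
# firewalls also expire entries on FIN handshakes and idle timeouts.
class ConnTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = set()

    def observe(self, src, dst, flags):
        key = (src, dst)
        if "RST" in flags:
            self.table.discard(key)   # RST frees the entry immediately
        elif "SYN" in flags:
            if len(self.table) >= self.capacity:
                raise OverflowError("connection table full")
            self.table.add(key)

ct = ConnTable(capacity=2)
ct.observe("1.1.1.1:1000", "2.2.2.2:80", {"SYN"})
ct.observe("1.1.1.1:1001", "2.2.2.2:80", {"SYN"})
ct.observe("1.1.1.1:1000", "2.2.2.2:80", {"RST"})  # slot reclaimed
ct.observe("1.1.1.1:1002", "2.2.2.2:80", {"SYN"})  # fits again
print(len(ct.table))
```

    Without the RST, the third SYN in this toy example would overflow the two-entry table.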

    The use of RST packets has a long history in network management. For example, we have used RST packets for many years to protect dialup users’ privacy. When a user hangs up, and a subsequent caller “inherits” the same IP address, it is possible for the second caller to receive private information that was intended for the first. Sending RST packets for the first caller’s pending connections ensures that this does not happen. This practice is very pro-consumer and pro-privacy.

    Besides all of the above, there is another way in which P2P is very non-neutral: the way in which it shifts costs from content providers to ISPs. The implicit contract of the Internet is that everyone — content providers and users alike — pays for his or her connection to the backbone. But when content providers use P2P, they shift the cost of their backbone connection to the user’s ISP, multiplying it in the process. This is unfair and non-neutral. Thus, again, blocking P2P — just like the blocking of other abusive behavior — actually makes the network fair and neutral and is therefore reasonable.

  14. barry payne_economist says:

    TECHNICAL INTENTIONS AND PROMISES OF WHAT’S “FAIR” AND “GOOD BEHAVIOR” DO NOT CONSTITUTE COMPETITION

    When a network provider’s contract restricts, for example, the use of a connection as a server, that indicates market power in the absence of competitive alternatives.

    The provider is attempting to tie the bandwidth to a particular use through non-neutral restrictions at the connection level.

    In a competitive market for bandwidth as a commodity, anyone not happy with this arrangement would simply find a provider who offers bandwidth free of such restrictions.

    An analogy would be an electric company which prevents the use of electric hair dryers as “kilowatt hogs” rather than placing uniform prices and TOS on the use of kilowatts and kilowatt-hours for all consumers (a “neutral” condition imposed in the absence of competition for distributional electric grids, which are a necessary monopoly in order to achieve efficiency).

    When some of today’s electric consumers set up windmills and solar devices to sell electricity back into the grid, that’s similar to a P2P user setting up “servers” on end use connections.

    The electric utility doesn’t claim it’s “invaded” by windmills and solar panels, at least not any more – it simply treats them as uniform flows of kilowatts and kilowatt-hours when the meter runs backwards, like corresponding uniform bandwidth in terms of Mbps or GB volume.

    To claim that P2P use exceeds the “up to” amount of bandwidth sold is equivalent to an electric company complaining that customers are exceeding the units of kilowatts sold and purchased (even if such use occurs collectively in “swarm” fashion, as from air conditioners on a hot day).

    When that happens, electric companies know that either the meter is broken or someone is stealing the electricity without paying for it and disconnect customers for the same. They don’t look out over the network grid and exclaim, “all this congestion is caused by an invasion of illegal electric hair dryer use and it must be treated separately”, i.e. in a non-neutral way, from all other electric use, i.e. the equivalent of broadband “content”.

    Further, to claim that constant, continuous use of an “up to” connection somehow exceeds its cost or shifts cost errs on several points.

    Under competition, there would be alternatives to whatever this means, i.e., somewhere there would be a provider that offers dedicated bandwidth at “up to” limits, which means, for example, that a 4 Mbps connection could indeed produce over 1,000 GB/month without the provider claiming that such use “exceeds” 4 Mbps and 1,000 GB and therefore “shifts costs” and “violates” the contract.
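    The arithmetic here checks out; a back-of-the-envelope calculation:

```python
# A 4 Mbps link saturated for a 30-day month, in decimal gigabytes.
# At no instant does this use "exceed" the 4 Mbps rate that was sold.
mbps = 4
seconds_per_month = 30 * 24 * 3600        # 2,592,000 s
megabytes = mbps / 8 * seconds_per_month  # megabits/s -> MB
gigabytes = megabytes / 1000
print(round(gigabytes))
```

    That is roughly 1,300 GB in a month without ever running faster than the advertised 4 Mbps.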

    No it doesn’t. And that’s the problem. Network providers are labeling such use as “bandwidth hogs” and “all you can eat consumers” and “contract violators” in order to engage in the non-neutral, discriminatory control of content, when all they’d have to do in the so-called “competitive” market in which they claim to exist, is simply provide X service for Y price under Z conditions, which would include, for example, dedicated bandwidth connections that really are accessible 24/7 at X Mbps limits as sold.

    In contrast, providers revert to phony threats of “Gigabytes of mass destruction” mode when faced with these ordinary questions of business practices in competitive markets.

    If P2P data packets are per se somehow technically dangerous or illegal in some way that other packets are not at the individual data packet level, then by all means control the problem, for example, in the same neutral way networks and websites are protected from hacker attacks.

    Meanwhile, what’s all the hand waving about? Is the bandwidth meter broken? Is the GB meter broken? Why are they secret? Can’t the individual violators be disconnected? What does content have to do with the use of more or less Mbps of bandwidth or volume of GBs, independent of how it uses TCP, and why is this fundamentally confused with congestion that results from all sources of peak use due to oversold (undedicated) capacity?

    The answer is, most broadband bandwidth in most areas is not sold as a uniform commodity under competitive conditions and requires net neutrality, like an electric grid, to result in outcomes similar to competition.

    Comcast just filed in Docket 07-52, on April 9, a comment based on a press release by Pando Networks, which collaborates with, among others, Verizon, to “improve” P2P use on broadband networks.

    Comcast states the press release “provides further proof that policymakers have been right to rely on marketplace forces, rather than government regulation, to govern the evolution of Internet services.”

    No. What the press release provides is more evidence that facility-based providers in the absence of effective competition are well on their way to undermining the thriving competition among content producers and consumers.

  15. Brett Glass says:

    Barry, you’re making an absurd “straw man” argument. For example, your claim that

    “When a network provider’s contract restricts, for example, the use of a connection as a server, that indicates market power in the absence of competitive alternatives”

    is absurd on its face. If this were true, a rental car contract that said I couldn’t use the rental car in a stock car race — or as a bulldozer — would prove that there was no competition among rental car companies.

    Acceptable use policies are as old as the Internet, and serve valuable technical and economic purposes.

    You then write:

    “The provider is attempting to tie the bandwidth to a particular use through non-neutral restrictions at the connection level.”

    Again, untrue. We believe that users should have access to any and all lawful content and services. However, they may not abuse the network while obtaining them. And they don’t need to, since any content or service which can be provided via P2P can also be provided via non-abusive means.

    In fact, because P2P is non-neutral (besides being harmful to the network and to our quality of service), we are actually enforcing neutrality by insisting that it not be used on those connections.

    “In a competitive market for bandwidth as a commodity, anyone not happy with this arrangement would simply find a provider who offers bandwidth free of such restrictions.”

    In fact, we do have other levels of service — business class service — in which we sell bandwidth free of such restrictions. But it costs much more. It has to. Since our backbone bandwidth costs us $100 or more per Mbps per month, we have to charge at least that much. You seem to be asking the Federal government to mandate that we eliminate our economical consumer broadband options and charge consumers $125 or more per month for a 1 Mbps connection (which would actually be a fair price, given our costs, if we had to assume that these connections could be pushed to the limit by P2P all day and all night). Such regulation would be very harmful to consumers. And ironically, such prices might make users think twice about giving that expensive bandwidth away to companies like Vuze without asking for compensation.

  16. Bill Clay says:

    I hoped this thread, or at least its technical portion, had gracefully petered out. However, Brett’s last two replies show that I obscured my key concerns by straying into the technical weeds.

    Can we agree on the boundaries of the playing field? I think we all want some form of Internet “network neutrality.” If not, the discussion is pure policy. I’m not foolish enough to think I can win any policy points in this crowd.

    We can doubtless disagree on the exact definition of “network neutrality,” but perhaps we can converge on its broad objective: preserving the Internet’s historic and open-ended flexibility — its fundamental source of power, growth, and promise as a democratic communication medium. We’re after a person-to-person medium that is more like the telephone network than like broadcasting. If we’re together this far, then maybe we can still have a useful discussion.

    Brett shows that when network service providers track and control individual TCP sessions, they can implement efficient and elegant bandwidth-allocation mechanisms at the network interior. But these mechanisms seem focused on a particular class of modern applications: P2P content distribution. My fear of such an approach is two-fold:

    1. Any such “governor” is likely to disable or severely handicap other TCP applications that are not unfair bandwidth consumers and that are not the intended target of the governor.

    2. To the extent such governors successfully identify specific applications, content, or information sources, they provide nearly irresistible commercial incentives for network service providers to become non-neutral.

    Now we get back to TCP RSTs and protocol layers. Richard contends that TCP is not a session-layer protocol. Given his OSI and IETF background, I must concede that, in the SAT sense of “pick the best match,” TCP best matches an ISO transport-layer protocol. But it’s still the only Internet protocol mechanism by which most communicating entities (TCP-sockets-based applications) initiate and manage their communication with other entities — and Richard and Brett both unashamedly refer to “TCP sessions.”

    While modern applications engineered to coexist well with the US’s DOCSIS cable networks may “get on with things” just fine after a TCP socket is reset, there are countless legacy TCP applications that do not. Even new TCP applications engineered for more modern portions of the Internet may not gracefully recover from such an event. (And the last two advantages that Brett cites for RSTs — firewall/router efficiency and privacy when session IDs are recycled — seem irrelevant to the network interior. Both are generally network-edge issues managed by the end user.)

    Unless bandwidth management systems are 100% accurate in identifying applications, any manipulation they perform at the session level is likely to have unforeseen and undesirable impacts on applications and information sources other than the targeted ones. It’s hard to believe that such mechanisms will never mis-identify a flow, given the limitless variations of traffic aggregation, address and port translation, protocol tunneling, etc. that occur at the network edge.
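    This worry can be quantified with a standard base-rate calculation. The accuracy figures below are illustrative assumptions, not measurements of any deployed classifier:

```python
# Illustrative numbers: a classifier that catches 99% of P2P flows and
# misfires on only 1% of everything else still resets thousands of
# innocent flows when P2P is a minority of total traffic.
flows = 1_000_000
p2p = flows * 0.10          # assumed share of flows that are P2P
other = flows - p2p
tpr = 0.99                  # assumed true-positive rate
fpr = 0.01                  # assumed false-positive rate
flagged_p2p = p2p * tpr
flagged_other = other * fpr
innocent_share = flagged_other / (flagged_p2p + flagged_other)
print(int(flagged_other), round(innocent_share, 3))
```

    Under these assumptions, 9,000 non-P2P flows get reset and roughly one in twelve terminated flows is innocent — which is Bill’s first flaw in numerical form.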

    When flows are mis-identified, bandwidth governors will manifest the first flaw above. When flow identifications are correct, they may manifest the second one. Thus, when ISPs manipulate data flows at the session level, at least one of the flaws above seems a logical certainty. Either one compromises our shared objective of network neutrality, whatever your formal definition of that term may be.

    Therefore, ISPs should implement such mechanisms only if (a) disinterested and widely accepted engineering analysis confirms that they are superior by a wide margin to any other available bandwidth allocation mechanism and (b) contractual and/or regulatory mechanisms provide for rapid and effective remediation of the first flaw and expressly prohibit instances of the second.

  17. Bill, your last comment is simply academic, given that Comcast has already announced they’re dumping the RST system in favor of protocol-agnostic traffic shaping.

  18. barry payne_economist says:

    Bill Clay, that was an excellent breakdown of the intersection between content on the top end as provided from the bottom up on the technical end, and it implies more about net neutrality than indicated in the introduction.

    One plausible interpretation seems to suggest something like this: TCP uses certain amounts of bandwidth, whether by route or session, as well as by application.

    In the presence of spare bandwidth capacity with no congestion problems, existing bandwidth allocation is “governed” implicitly and indirectly through a legacy accumulation of direct limits placed on TCP and corresponding applications with loaded bandwidth over time.

    When something like P2P is considered a congestion problem, attempts to govern the bandwidth (or reallocate how the bandwidth is used – doesn’t sound like the same thing?) can result, as you point out, in mis-identified flows, where bandwidth governors overreach and restrict unintended applications, or conversely, when flows are identified correctly, create incentives to manipulate them beyond the original objective.

    It sounds somewhat like the Type I/Type II error problem of false positives and negatives with a twist. Even when the error is eliminated via accurate identification, the potential conflict between knowing exactly what the content is and favoring some content over others remains.

    In any case, whatever comes out of whatever is governed or allocated emerges in certain locations under certain conditions on networks as “bandwidth” in the conventional uniform measure of Mbps.

    Articulated positions like this go far in informing issues associated with net neutrality and hopefully will continue.

  19. Robb Topolski says:

    Nice thread. Too bad that I saw it so late.

    Richard — What is Comcast doing? We still don’t know whether they are ditching an unacceptable method for another unacceptable method.

    Richard — I don’t lack the expertise; if I don’t have it, I can buy it. What I lack is the physical access. Remember that Sandvine is not in my segment; it’s in Troutdale — many miles away from me. Get a map and locate Hillsboro and Troutdale, Oregon.

    Richard — Why are you talking about the Nagle algorithm? It’s in RFC 1122 (as a SHOULD). But, again, it’s in the purview of the endpoints (e.g. the hosts) whether to implement it, and, if implemented, it MUST be under the control of the application. The ISP should not have control here. Please explain this to me so that I know your perspective.

    Brett — repeatedly dropping packets of a particular application or protocol is a red herring. It’s still prohibited discrimination under the policy statement. Not to mention that your explanation that doing so adds to congestion is wrong: doing so replaces streaming data with small probes that back off at multiplicative (doubling), randomized intervals. Your point is both wrong and moot.
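    The backoff behavior described here can be sketched in a few lines of Python. The timer constants are illustrative; real TCP stacks derive the initial retransmission timeout from round-trip measurements and add randomization:

```python
# After repeated drops, TCP's retransmission timeout (RTO) doubles on
# each attempt (exponential backoff, capped), so the "probe" traffic
# thins out rather than adding sustained load. Constants illustrative.
def backoff_intervals(base=1.0, retries=6, cap=64.0):
    rto, out = base, []
    for _ in range(retries):
        out.append(rto)
        rto = min(rto * 2, cap)  # multiplicative (doubling) backoff
    return out

print(backoff_intervals())
```

    Six retries at these assumed constants wait 1, 2, 4, 8, 16, and 32 seconds: each successive probe consumes less of the link, not more.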

    Barry — Your quote — “The essential question is, What bandwidth is User A ‘overusing’ through P2P in the first place – its own, that of User B, both A and B, or none?” — is dead on. Network Management suddenly means keeping the network in a constant state of congestion. It used to mean keeping congestion as rare as possible. Most of this “fairness” garbage relates to behavior that actually only happens during congestion. And Ou+Bennett and some in the IETF quoting Bob Briscoe love to talk about P2P applications opening and sending streams of data over hundreds of connections — which doesn’t happen.

    (BTW, Briscoe and I have been exchanging email — he admits that his “problem statement” overshoots the current situation because he wants to avoid a situation that could occur should residential internet users get much larger upload pipes. I’ll give him that, but his statements are not presented in future terms either by him or those who quote him.)
