One side effect of the FCC's recent move against Comcast's P2P "delaying" technology has been to make discussions about the dark art of network management even more pressing (and they were pretty pressing before). If Comcast can't use TCP reset packets to limit the number of BitTorrent connections a client can spawn, what legitimate techniques can ISPs use to deal with congestion? Google's Vint Cerf, one of the grandfathers of the Internet, today weighed in with his answer: transmission rate caps.
Cerf, writing on the company's official policy blog, is down on usage-based billing, an idea which has been gaining traction among some ISPs (Bell Canada, Time Warner, and others are experimenting with it). Billing by the bit provides a strong disincentive to use the Internet for "non-essential" services and could cripple sites like YouTube, or the broader shift of video content onto sites like Hulu. It would certainly reduce overall traffic, but at the cost of discouraging Internet use—potentially a huge price to pay when you consider the innovations wrought by the Internet.
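A back-of-the-envelope calculation shows why per-byte pricing falls hardest on bandwidth-heavy uses like streaming video. The prices and usage figures below are purely illustrative assumptions, not actual ISP rates:

```python
# Hypothetical comparison of flat-rate vs. usage-based billing.
# All dollar figures and usage levels are illustrative assumptions.

FLAT_RATE = 40.00   # dollars per month, unmetered
PER_GB = 1.00       # dollars per gigabyte under a metered plan

def metered_bill(gb_used: float) -> float:
    """Monthly cost under a simple pay-per-gigabyte plan."""
    return gb_used * PER_GB

# A light user (email, browsing) vs. a heavy video watcher.
light = metered_bill(10)    # 10 GB/month of mostly text and images
heavy = metered_bill(150)   # 150 GB/month of streaming video

print(f"light user: ${light:.2f}, heavy user: ${heavy:.2f}")
print(f"flat rate for either: ${FLAT_RATE:.2f}")
```

Under these made-up rates the light user saves money, but the video watcher's bill nearly quadruples against the flat rate, which is exactly the disincentive Cerf worries about.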
Cerf calls this "a kind of volume cap, which I do not find to be a very useful practice." He suggests instead a rate cap where users can "purchase access to the Internet at a given minimum data rate and be free to transfer data at least up to that rate in any way they wish."
Quick show of hands? How many Internet users already think this is what they're paying for?
The return of the reserved pipe?
They aren't, of course, as consumer-level, $40-a-month broadband doesn't come with Service Level Agreements (SLAs) or other guarantees. Cerf's model would have ISPs guarantee a base level of bandwidth, with more made available only if network conditions allow. Users would be free to do whatever they like with their bandwidth. ISPs would also know exactly the minimum amount of bandwidth needed to serve customers and could plan accordingly, though this would force cable operators to offer huge upgrades over current practice, which involves sharing low-bandwidth upload links with entire neighborhoods.
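Cerf's model amounts to a simple allocation rule: every subscriber always gets their guaranteed minimum, and whatever headroom the link has left is divided among active users. A minimal sketch of that idea, with made-up subscriber rates and a naive equal split of the headroom (real schedulers would weight by demand):

```python
# Sketch of a Cerf-style guaranteed-minimum-rate allocator.
# Subscriber names, rates, and the equal-split policy are all
# illustrative assumptions, not any ISP's actual scheme.

def allocate(link_mbps: float, subscribers: dict) -> dict:
    """subscribers maps name -> guaranteed minimum rate (Mbps).
    Returns each subscriber's share of a fully loaded link."""
    guaranteed = sum(subscribers.values())
    if guaranteed > link_mbps:
        raise ValueError("minimum rates oversold beyond link capacity")
    bonus = (link_mbps - guaranteed) / len(subscribers)  # equal headroom split
    return {name: rate + bonus for name, rate in subscribers.items()}

subs = {"alice": 5.0, "bob": 5.0, "carol": 10.0}  # Mbps minimums
print(allocate(100.0, subs))
```

The key property, which a volume cap lacks, is the invariant in the `ValueError` check: the ISP can never promise more guaranteed bandwidth than the link physically carries, so capacity planning becomes straightforward.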
Cerf also indicated that Comcast's move to protocol-agnostic management techniques is a step in the right direction, and he revealed that he has been in contact with Comcast engineers as the company's trial deployments take place. In a statement that will sound like music to the ears of Comcast executives, Cerf also said that "the real question for today's broadband networks is not whether they need to be managed, but rather how."
That point was also made on Friday by Kurt Dobbins at Arbor Networks, a provider of deep-packet inspection gear used for some kinds of network management. In a blog posting in the wake of the FCC decision, Dobbins blasted the "myths" of the net neutrality debate and said that management of some sort was an absolute necessity.
"Unmanaged networks result in serious degradation of service availability and quality for all users," he wrote. "It will also mean that customers will be paying more for less, as providers are forced to continually build out their networks to stay ahead of the massive bandwidth consumption growth."
Exaflood redux
This sort of rhetoric fits in perfectly with recent talk of an "exaflood" of bandwidth, as though using the Internet more is something to be feared and possibly discouraged. As we've noted when discussing the concept, though, the Internet core has plenty of capacity and is in no danger of being overwhelmed; the problem, especially in the US, is in last-mile links.
In South Korea and Japan, 100Mbps fiber links to the home are common, rubbishing the idea that US ISPs just can't give Americans the insane speeds they seem to want. They certainly could; it's just that no one but Verizon has been willing to bite the bullet and pay for fiber to the home. Such $20 billion projects aren't good for short-term profits (though FiOS has made Verizon the only real forward-looking telco).
ISPs in favor of throttling and other controls generally argue that they are in danger of being "overwhelmed," but that danger is the result of business decisions, not a law of nature (offering 400 homes a single 12Mbps upload pipe, for instance, was never going to deliver spectacular service). Networks can't truly be "overwhelmed" in any case: cable modems and DSLAMs cap upload and download speeds based on how much a user pays per month, so the maximum possible data rate is well known. What ISPs don't want is to pay for huge amounts of peak capacity that will sit unused much of the time; they oversold service on the premise that, like a highway, most cars wouldn't be on the road at once.
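The oversubscription arithmetic behind that business model is easy to work out. Using the article's example of 400 homes sharing one 12Mbps upload pipe, the per-home share depends entirely on how many homes transmit at once (the activity percentages below are illustrative assumptions):

```python
# How oversubscription holds up (or collapses) as concurrency rises.
# Figures from the article's example: 400 homes, one 12 Mbps upload link.

PIPE_MBPS = 12.0
HOMES = 400

def per_home_mbps(active_fraction: float) -> float:
    """Fair share of the pipe when a fraction of homes transmit at once."""
    active = max(1, round(HOMES * active_fraction))
    return PIPE_MBPS / active

print(per_home_mbps(0.02))  # 2% active (bursty web use): 1.5 Mbps each
print(per_home_mbps(0.50))  # 50% active (always-on P2P): 0.06 Mbps each
```

At 2 percent concurrency everyone sees a respectable share; once half the homes run always-on uploads, each gets 60 kbps, which is why unattended apps broke the old assumptions.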
As unattended apps like P2P and network backup utilities tie up a portion of bandwidth for ever longer periods of time, the old solutions aren't working as well and congestion is one result. Cerf's idea would take us back to the old "circuit-switched" days in the sense that each Internet user would instead get a guaranteed line with a minimum guaranteed rate at all times. This would answer consumer complaints about "not getting what I paid for," but would cost ISPs more cash.