
CTS 2008. Net neutrality proponents can’t have it both ways


TORONTO – Will we have smart Internet networks, or big fat dumb pipes? If net neutrality proponents win the debate, we’ll wind up with big fat dumb pipes that’ll be far more congested than they are now. That’s the message three of the four panelists drove home at Wednesday’s net neutrality panel at the 2008 Canadian Telecom Summit.

Net neutrality, although it has different meanings, is essentially about equal access to the Internet. Proponents fear that broadband carriers will use their market power to control activity online and determine what content gets to the consumer first and fastest.

The explosive growth of bandwidth-hungry peer-to-peer file sharing and video streaming is posing not only congestion problems but also pricing issues for broadband carriers.

Eric Loeb, AT&T’s vice president of international, external and regulatory affairs, posed a number of questions that cut at the heart of the net neutrality debate:

“How is the Internet going to be engineered – as a smart network or a dumb pipe? Who is going to make those engineering decisions – the private sector or the government? And how is this network going to be financed – by a variety of parties that all benefit together from the Internet, or entirely by the end users?’’

“What is the problem that’s creating all this talk about net neutrality, and does it need to be solved with new regulation, or can traditional regulatory tools address the concern?’’ he asked.

Then he answered: “There is an actual problem developing for consumers on the Internet and therefore a real issue for policymakers. But the problem is not the fear of a monopolistic Internet, and it’s certainly not going to be solved by regulatory intervention that attempts to predict future problems.’’

“The most immediate challenge consumers collectively face is how the Internet can most effectively evolve and grow amid the unprecedented new wave of broadband traffic.’’

Net neutrality proponents can’t have it both ways, he said. They can’t have increasingly robust broadband networks to handle new applications if they aren’t willing to pay their proportional share of the cost. “Under such a policy regime, there’s a very real chance you’ll end up with more congested networks, not less,’’ he said.

And, simply increasing capacity isn’t a solution to the congestion problem, the panelists agreed.

“Whatever the bandwidth hog is doing today, the mainstream user will be doing tomorrow,’’ warned David Caputo, co-founder of Waterloo, Ontario-based Sandvine, whose technology is at the centre of the maelstrom over Comcast’s peer-to-peer (P2P) bandwidth-throttling practices that’s ignited the recent U.S. net neutrality debate.

“It really comes down to concepts of fairness in using this shared resource, and if you keep the subscriber front-and-centre and focus on improving the quality of the experience, then I think it will become quite laughable in the next few years that anyone ever maintained that all packets were created equal.’’

Sandvine undertakes a yearly survey called Broadband Phenomenon that’s based on packets, as opposed to questionnaires. When the company analyzed what people are actually using on the downstream side of their Internet connections, it found that P2P file-sharing traffic accounts for 35.6%.

Web browsing, at 32%, has made a “miraculous comeback,’’ he said, in terms of bandwidth consumption, driven by the popularity of high-traffic sites like Facebook and MySpace.

“Streaming has been the real new racehorse that’s come out at almost 18% of downstream traffic,’’ he said, adding that most people mistakenly believe that YouTube is the No. 1 streaming web site when in reality, MySpace is the leader. That’s because the stream on MySpace starts immediately. With YouTube, the consumer must click on a video to activate it first. “It’s an active versus a passive thing,’’ he said. “When something is passive and allowed to consume data, it consumes quite a bit of bandwidth.’’

Newsgroups are at 6.5%, while VOIP represents 0.3% of downstream traffic.

On the upstream side, P2P file sharing is “far and away’’ the number one consumer activity, consistently across all networks, at 75%, he said.

Caputo also gave a quick tutorial on bandwidth, which has three dimensions, though everyone only ever talks about one: speed. The other two are latency and jitter, the predictability of that latency. “VOIP telephony is absolutely latency and jitter sensitive, and when you see P2P, a big bulky transfer which adds latency and jitter to the network, you can see why folks feel they have to manage their networks,’’ he said.
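Caputo didn’t walk through the numbers, but the distinction is easy to illustrate. The short sketch below uses invented ping samples, not Sandvine data: it summarizes a link’s latency as its average round-trip time and its jitter as the average change between successive samples, with the “congested’’ figures assumed to mimic a link sharing capacity with a bulky P2P transfer.

```python
# Illustrative sketch only: summarizing latency and jitter from ping samples.
# All numbers below are invented for demonstration, not Sandvine measurements.
import statistics

def summarize(samples_ms):
    """Return (average latency, jitter) for round-trip times in milliseconds.

    Jitter is taken here as the mean absolute difference between successive
    samples, one common way of expressing how unpredictable the latency is.
    """
    avg = statistics.mean(samples_ms)
    jitter = statistics.mean(abs(b - a) for a, b in zip(samples_ms, samples_ms[1:]))
    return avg, jitter

# Hypothetical round-trip times: an idle link vs. one sharing capacity
# with a bulk transfer that keeps the queue full.
idle_link = [20, 21, 19, 20, 22, 20, 21]
congested_link = [20, 85, 30, 140, 25, 95, 160]

for name, samples in (("idle", idle_link), ("congested", congested_link)):
    avg, jitter = summarize(samples)
    print(f"{name}: average latency {avg:.1f} ms, jitter {jitter:.1f} ms")
```

On the assumed samples, both links move roughly the same amount of data per ping, but the congested one shows far higher and far less predictable delay, which is what degrades a VOIP call.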

Developing smart networks, and being allowed to price accordingly to build them, was a recurring theme: network operators worry about the expected explosive growth in traffic from bandwidth-hungry consumers.

“Estimates are that backbone traffic is doubling every 12 to 15 months. This is due in part to an overall increase in the number of end users connected to the Internet, but also to more bandwidth-intensive applications, particularly video on the Internet,’’ said Loeb.

That’s just the tip of the iceberg.

“Why? Consider this: Downloading a high definition movie takes more bandwidth than downloading 35,000 web pages, or 2,300 songs over iTunes. A 30-minute HD video takes up the same network capacity as more than 1 million one-minute voice calls. YouTube consumes more bandwidth than the (entire) Internet did in 2000.’’
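Neither claim came with the underlying arithmetic, but a rough back-of-envelope sketch shows how such figures are derived. The file sizes below are illustrative assumptions (roughly an 8 GB HD movie, a 230 KB web page and a 3.5 MB iTunes track), not numbers given at the panel; the last lines simply restate “doubling every 12 to 15 months’’ as an annual growth multiple.

```python
# Back-of-envelope checks of the comparisons cited on the panel.
# The file sizes are illustrative assumptions, not figures given by Loeb.
HD_MOVIE_GB = 8        # assumed size of one HD movie download
WEB_PAGE_KB = 230      # assumed size of an average web page
ITUNES_SONG_MB = 3.5   # assumed size of an average iTunes track

movie_bytes = HD_MOVIE_GB * 1e9
print("web pages per HD movie:", round(movie_bytes / (WEB_PAGE_KB * 1e3)))   # ~35,000
print("songs per HD movie:", round(movie_bytes / (ITUNES_SONG_MB * 1e6)))    # ~2,300

# "Doubling every 12 to 15 months" restated as an annual growth multiple.
for months in (12, 15):
    print(f"doubling every {months} months = {2 ** (12 / months):.2f}x per year")
```

Under those assumptions the web-page and song ratios land close to the figures Loeb quoted, and traffic doubling every 12 to 15 months works out to growth of roughly 1.7x to 2x per year.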

However, not all applications are equally sensitive to network congestion, he said. The average e-mail is much less sensitive to congestion and delay than real-time HD video, multi-player gaming or life-saving medical applications.

Loeb also said that one-size-fits-all pricing can be bad for consumers. A consumer who is engaged in traditional e-mail or Web surfing is not consuming anywhere near the network capacity of their neighbour watching videos on the Internet or sharing P2P files. “To avoid network congestion, it’s not enough just to add dumb capacity. And, if it were, it would be very, very expensive,’’ he said.

“The free lunch that heavy Internet users would get would be paid for by the light Internet users, and that doesn’t strike us as being particularly neutral, particularly if the price increases would also widen the digital divide for low-income consumers. This is where we think the net neutrality debate and the calls for prescriptive regulation get dangerous,” added Loeb.

“We think there’s a better way for the Internet community, which is to let the networks that form the Internet continue to evolve, as they have done for four decades, to actually enable these new applications, and to maintain the freedom to explore intelligent-management designs and to test pricing mechanisms that can help better finance this growth.”

He added that policymakers should deal with actual problems rather than guessing at speculative problems and speculative solutions. “We should all be skeptical about the closed regulation that picks winners among industry participants, that freezes network-design evolution and can stifle incentives to make the further needed network investments,’’ he said.

Mike Lee, Rogers’s chief strategy officer, took the opportunity to clarify what Rogers doesn’t do on its network given the “speculation and outright name-calling” the company’s drawn. Lee said Rogers doesn’t block traffic, impact encrypted traffic, or manage downstream usage. “We don’t make any judgments with regard to the legality, the profitability, morality, or ethics of these packets. There is no judgment made. There is no inspection made of the contents of the packets. The packets are just generic packets to us,’’ he said.

The panel’s lone participant who wasn’t affiliated with a network carrier, University of Toronto professor Andrew Clement, took exception to comments made earlier at the conference by Rogers regulatory chief Ken Engelhart that net neutrality was a “fictitious’’ debate.

In his presentation, Clement didn’t address what the CRTC should or shouldn’t regulate. Instead, he focused on the network operator’s relationship with its subscribers, stressing that it should be a more open, trusting one. He also asked whether network operators warrant the public’s trust, noting that there are indeed grounds for suspicion, and cited the oft-used example of Telus blocking certain web sites during its 2005 labour disruption.