The threat to universal Internet connectivity

Can -- and should -- blockers, spammers, and domain name overlords be stopped?

See correction at end of article

On Sept. 15, 2003, engineers at VeriSign pressed a button and launched Site Finder, a service that directed Web users who mistyped .com or .net Web addresses to a VeriSign search engine rather than the usual “page not found” error message.

Within minutes, the new code rippled through the worldwide .com and .net domain name resolution infrastructure, which VeriSign controls. Within hours, according to the Internet Corporation for Assigned Names and Numbers (ICANN), it had begun interfering with, and in some cases rendering inoperable, many spam filters, e-mail applications, and sequenced lookup services designed to expect the standard “no such domain” error response.
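
Many of those programs relied on the long-standing convention that a lookup of an unregistered name fails outright. A minimal sketch of that pattern in Python, using only the standard socket library (the domain_appears_registered helper is illustrative, not drawn from any of the affected products), shows why a wildcard answer broke it:

    import socket

    def domain_appears_registered(name):
        """Pre-Site Finder assumption: an unregistered .com or .net name
        fails to resolve, so a lookup error means the domain does not exist."""
        try:
            socket.gethostbyname(name)
            return True            # got an address back
        except socket.gaierror:
            return False           # NXDOMAIN or other resolution failure

    # A spam filter might reject mail whose sender domain does not resolve.
    # Once the .com and .net zones answered every query with Site Finder's
    # address, this returned True for any string ending in .com or .net,
    # and the "nonexistent sender domain" test silently stopped working.
    print(domain_appears_registered("example.com"))
    print(domain_appears_registered("surely-not-registered-xyz123.com"))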

Within days, a public outcry from the technical community and ISPs — who had scrambled to write work-arounds to the new code — forced ICANN to demand the withdrawal of the new service, claiming it “substantially interfered with some number of existing services” and “considerably weakened the stability of the Internet.” On the night of Oct. 4, VeriSign pulled the plug.

Future historians may call this episode the Pearl Harbor of the Web. For many, it was the strongest indication yet of a war that’s begun over the Internet’s future — a war that pits innovation against stability and security, commercial interests against technical communities and regulatory bodies, and proprietary initiatives against consensus protocols.

Site Finder exposed a key battleground: Should more intelligence be added to the Internet’s core to bolster performance and security, or will a smarter core clog up the network, limiting innovation and undermining the so-called universal end-to-end connectivity principle?

“The commercialization of aspects of the network has led to forces like the VeriSign one, and of course, I’m not happy at all about that,” says Vint Cerf, senior vice president of technology strategy at MCI, and widely known as one of the fathers of the Internet for his role in designing the TCP/IP protocols. “They’ve gone in and changed the core functionality of the Net, and for most of us, that’s just unacceptable.”

Counters VeriSign’s CEO Stratton Sclavos: “People who believe that change at the core is not a good thing have good intentions … but perhaps have lost their way about the realities of what this network is today and what it must become.”

Goodbye to the end-to-end principle?

The VeriSign development came on the heels of other incidents that seemed to indicate the Internet’s vaunted end-to-end principle’s days were numbered. In August, some ISPs blocked Port 135 in response to warnings that hackers could use it to exploit a vulnerability in Microsoft Windows’ RPC (Remote Procedure Call) protocol. Although many enterprises and end-users had already blocked Port 135, some were still using it — for example, to connect Microsoft Outlook to Exchange Server — and found themselves cut off.
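
The effect of an upstream port block is blunt: the client’s TCP connection attempt simply times out or is refused, even though the server on the other end is healthy. A minimal reachability check in Python illustrates what the cut-off Outlook users would have seen (the host name is a placeholder):

    import socket

    def port_reachable(host, port, timeout=3.0):
        """Attempt a plain TCP connection; an ISP-level block typically
        surfaces as a timeout or refusal before the server is ever reached."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Outlook located Exchange through the RPC endpoint mapper on TCP 135,
    # so users behind an ISP filtering that port saw exactly this failure.
    print(port_reachable("exchange.example.com", 135))   # placeholder host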

Other incidents, according to Fred Baker, a Cisco fellow and former chairman of the Internet Engineering Task Force, have included third-world countries outlawing VoIP (voice over IP) traffic for economic reasons and China Telecom disabling DNS access to specific end-users’ servers for political purposes. All these incidents violate the end-to-end principle, under which the network core simply passes traffic along while end-users and applications retain visibility into what’s happening across the network and can rely on known end-to-end protocols.

[Chart: Internet Traffic Climbing Fast, showing total daily queries to the Internet’s DNS, 1994 to 2003]

According to Cerf, the balkanization of the Internet and the erosion of end-to-end visibility have been under way for a decade, thanks to escalating security threats and rapid Internet growth. Cerf points to the wholesale deployment of increasingly intelligent devices such as stateful firewalls and NAT (network address translation) boxes that block or translate network traffic under certain conditions, stymieing the end-to-end principle. NAT boxes allow ISPs and enterprises to work around the growing scarcity of IP addresses by letting many computers share the same public address.

“Architectural purists like me consider NAT boxes to be a form of abomination,” Cerf explains. “They make it look [to end-users] like the network is still functioning on an end-to-end basis, but what they don’t know is that there’s this guy in the middle madly translating.” 

NATs and the middleware written to accommodate them, Baker adds, make it difficult for new applications to use the Net without being specifically tuned for a given local environment.
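
Cerf’s “guy in the middle madly translating” amounts to little more than a lookup table, but it is a table the endpoints never see. The toy port-address translation below (a conceptual sketch, not any vendor’s implementation) shows both why the far end sees only the shared public address and why unsolicited inbound traffic has nowhere to go — which is what forces the per-application tuning Baker describes:

    # Toy port-address translation: many private hosts share one public address.
    PUBLIC_IP = "203.0.113.10"      # documentation-range address, illustrative only

    nat_table = {}                  # (private_ip, private_port) -> public_port
    reverse_table = {}              # public_port -> (private_ip, private_port)
    next_public_port = 40000

    def outbound(private_ip, private_port):
        """Rewrite an outgoing flow's source to the shared public address."""
        global next_public_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_public_port
            reverse_table[next_public_port] = key
            next_public_port += 1
        return PUBLIC_IP, nat_table[key]

    def inbound(public_port):
        """Only flows an inside host already initiated can be mapped back;
        anything else is dropped at the middlebox."""
        return reverse_table.get(public_port)

    print(outbound("192.168.1.20", 5060))   # e.g. a VoIP client behind the NAT
    print(inbound(40000))                   # known flow: translated back inside
    print(inbound(41234))                   # unknown flow: None, i.e. dropped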

On the security side, ISPs have increasingly tried to stop malicious traffic as far upstream as possible from their enterprise customers, which means putting intelligent filtering capabilities into the core, explains Rob Clyde, Symantec’s CTO.

From ingress and egress filtering to block ICMP (Internet Control Message Protocol) Echo Reply attacks, to router ACLs (Access Control Lists) for foiling DoS (denial of service) attacks, to simply temporarily blocking certain addresses or packets with certain characteristics, ISPs have gotten a lot more aggressive, Clyde says.

“And blocking good traffic as well as bad,” Clyde adds, is “always a risk an ISP runs. There’s a constant trade-off.”
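
The trade-off Clyde describes follows directly from how coarse the upstream rules have to be. The sketch below is a hypothetical composite of the techniques he lists, not Symantec’s or any ISP’s actual policy; note that both rules drop some legitimate traffic along with the attack:

    # Hypothetical composite of the upstream filtering techniques Clyde lists.
    BLACKLISTED_SOURCES = {"198.51.100.7"}   # addresses seen in an ongoing DoS flood

    def should_drop(packet):
        """packet: dict with 'src', 'proto', and optional 'icmp_type' keys."""
        if packet["src"] in BLACKLISTED_SOURCES:
            return True    # blunt: that host's legitimate traffic is dropped too
        if packet["proto"] == "icmp" and packet.get("icmp_type") == "echo-reply":
            return True    # foils echo-reply floods, but also breaks ordinary ping
        return False

    print(should_drop({"src": "198.51.100.7", "proto": "tcp"}))    # True
    print(should_drop({"src": "192.0.2.1", "proto": "icmp",
                       "icmp_type": "echo-reply"}))                # True, even if benign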

Innovation vs. performance

One argument against adding intelligence to the core is that Internet traffic is growing faster than the processing gains Moore’s Law predicts, so additional processing equipment in the core will be unable to keep up with the load over time.

“The number of CPU cycles that the core router has to deal with the packets is decreasing,” claims Guy Almes, chief engineer at Internet2, a university consortium working to deploy next-generation network technologies. “Those routers are improving exponentially, but they’re facing demands that are also growing exponentially, but perhaps at a higher rate.”
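
Almes’ squeeze can be put in rough numbers. Assuming, purely for illustration, that router capacity doubles every 18 months while traffic doubles every 12 (neither figure is an Internet2 measurement), the processing budget per packet shrinks steadily:

    # Illustrative doubling periods only, not measured Internet2 data.
    CAPACITY_DOUBLING_MONTHS = 18    # Moore's-Law-style improvement in routers
    TRAFFIC_DOUBLING_MONTHS = 12     # assumed faster growth in packets to handle

    for years in (1, 3, 5):
        months = 12 * years
        capacity_gain = 2 ** (months / CAPACITY_DOUBLING_MONTHS)
        traffic_gain = 2 ** (months / TRAFFIC_DOUBLING_MONTHS)
        ratio = capacity_gain / traffic_gain
        print(f"after {years} year(s): {ratio:.2f}x today's cycles per packet")
    # Prints roughly 0.79x, 0.50x, and 0.31x: the per-packet budget keeps falling.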

But most opponents of adding intelligence to the core say the real issue is the dampening of Internet innovation, because new applications will be limited to fewer protocols and forced to conform to more barriers and gateways. “We need to be able to freely deploy new applications,” Cisco’s Baker says, noting that once upon a time the World Wide Web was a new Internet application. “If we were to try to deploy it today, we would not be able to do so — I’d have to convince IT to allow it through the firewall — and there has to be a business reason to do that.”

And the challenge will only get trickier as demands for new functionality such as VoIP and video applications drive vendors and ISPs to deploy more QoS (quality of service) and control capabilities into the core.

“The intent [of the founders of the Internet] was to observe very clearly the layered architecture, to minimize the amount of intelligence at the core,” MCI’s Cerf says, nonetheless acknowledging there are multiple ways to accomplish the QoS objective. “Either we’ll put in two classes in the core of the network and treat them separately, or take that money and spend it on building a higher capacity network. We have a fight every other Tuesday at MCI over this.”
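
The “two classes” option Cerf mentions is, in practice, DiffServ-style marking: traffic is tagged at the edge, and the core gives tagged packets preferential treatment. A minimal sketch of marking a flow from an end host, on platforms whose Python socket module exposes IP_TOS (the destination address is a placeholder):

    import socket

    # DSCP "expedited forwarding" (46) sits in the top six bits of the IP TOS byte.
    EF_TOS = 46 << 2    # 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)      # e.g. a VoIP stream
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)    # mark the flow
    sock.sendto(b"voice payload", ("192.0.2.50", 5004))          # placeholder endpoint
    sock.close()

Whether the core actually honors that mark, or whether the ISP simply builds a higher-capacity network instead, is exactly the fight Cerf describes.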

“It’s a ticklish point. You don’t want to do something that turns [the Internet] back into a circuit switched system,” says Steve Crocker, chairman of ICANN’s Security and Stability Advisory Committee.

Crocker doesn’t want innovation to revert to a model similar to the old Bell System, which built intelligence into central switches that were highly reliable and stable but took decades to deploy and upgrade.

“The thing that has made the Net most successful is keeping the core of it as simple as possible and keeping the innovation in the edges,” Crocker says. “The standard wisdom is to avoid putting anything into the center that doesn’t have to be there. Less is better.”

Nonsense, VeriSign’s Sclavos says, arguing that rapidly growing end-user demands can best be handled by putting more intelligence into the core, which includes the DNS.

“We’re seeing a dramatic increase in consumer usage,” Sclavos says. “When we bought Network Solutions, we were handling a billion DNS queries a day. Now we’re handling 10 billion, and it’s doubling about every 18 months.”

Sclavos says pushing intelligence to the edge made sense in the early days when the Internet was an academic network, but because of the network’s growth, it no longer does.

“More intelligence has to be in the network to provide better routing and better security,” Sclavos says. “And it has to be in the core routing systems if you’re going to get latencies that make sense for people.

“You want the core to be where the complexity is, so it’s hidden from the user,” Sclavos adds. “We’ve now got so many touch points at the end that you want the edge to be simple so that it can scale.”

Sclavos strongly implies that the existing system of technical standards committees has broken down and that individual players and new entities must take the lead in defining and investing in the next generation of Internet capabilities, including an intelligent core.

“The Internet is growing faster than maybe the standards [bodies] can keep up,” Sclavos says. “We’re going to get back to the world of the 1980s and early 1990s, migrate back toward a healthy tension between vendor-driven standards and community ones.”

Standards committees to the rescue?

Whether the Internet technical community can succeed in preserving decentralized innovation based on end-to-end protocols may hinge on the fate of standards such as IPv6, an entirely new Internet addressing architecture designed to replace IPv4, the current worldwide standard.

A poster child for next-generation Internet protocols, IPv6 would presumably render NATs unnecessary, providing a much broader pool of available IP addresses and thereby unifying balkanized user communities on a single platform. Together with a related protocol called DNSSec (DNS Security), it would facilitate additional security capabilities, such as encryption and authentication, that anyone could leverage.
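
The difference in address space is easy to quantify: IPv4 addresses are 32 bits long, IPv6 addresses 128 bits. A quick calculation:

    ipv4_addresses = 2 ** 32      # about 4.3 billion
    ipv6_addresses = 2 ** 128     # about 3.4e38

    print(f"IPv4: {ipv4_addresses:,} addresses")
    print(f"IPv6: {ipv6_addresses:.2e} addresses")
    print(f"IPv6 offers {ipv6_addresses // ipv4_addresses:.2e} addresses "
          f"for every single IPv4 address")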

The question is whether these architectural protocols can move from committee approval to broad adoption before proprietary solutions not designed for end-to-end adoption take hold.

“Deployment is the hard part, … the actual adoption,” ICANN’s Crocker explains. “What happens in the Internet is the protocols get decided on in a consensus process, and then the adoption of those protocols is done by individuals or individual organizations. … Things either develop a following, or they don’t.”

Crocker notes that there’s a natural resistance to adopting protocols such as IPv6 and DNSSec that add overhead cost to the system. “It adds both computational cost and storage,” he explains of DNSSec. “The data takes more space in the registries, and the generating and checking of the signatures takes computation time, and the responses are bigger and take more transmission.”
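
A rough, back-of-the-envelope illustration of the storage and transmission side of Crocker’s point, using assumed sizes rather than measurements from any registry (an unsigned A-record response is on the order of 100 bytes; a 1,024-bit RSA signature alone adds 128 bytes):

    # All sizes below are rough assumptions for illustration, not measured values.
    PLAIN_A_RESPONSE = 100        # bytes: header, question, one A record (approx.)
    RSA_1024_SIGNATURE = 128      # bytes of raw signature carried in an RRSIG record
    RRSIG_OTHER_FIELDS = 40       # approx.: type covered, TTLs, expiry, key tag, signer

    signed_response = PLAIN_A_RESPONSE + RSA_1024_SIGNATURE + RRSIG_OTHER_FIELDS
    print(f"unsigned response: about {PLAIN_A_RESPONSE} bytes")
    print(f"signed response:   about {signed_response} bytes "
          f"({signed_response / PLAIN_A_RESPONSE:.1f}x bigger on the wire)")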

Although IPv6 has gotten early traction in educational institutions and in countries such as China that have severe address shortages under the IPv4 system, it faces significant hurdles in the United States.

“That whole product upgrade cycle is likely to be very complex. Everything has to be changed. It will probably take government driving IPv6,” Symantec’s Clyde says. “I don’t think industry alone feels any overwhelming compelling need to do it.” 

In this context, VeriSign’s Sclavos, who also supports IPv6 deployment, is largely unapologetic about VeriSign’s rebuffed attempt at unilateral innovation with the Site Finder service.

“The Internet is the only place in the world where we wait for people to knock down our doors before we take action,” Sclavos says, arguing that a faster adoption mechanism is needed so the Internet can meet impending needs and threats. He also claims that VeriSign has been a reliable steward of the DNS infrastructure, investing more than $100 million in the system, even during a down economy, and delivering 100 percent availability for six years.

“[VeriSign’s] been very vocal about saying that it’s important to be able to innovate at the core,” ICANN’s Crocker says. “And nobody has convinced them that what they did was wrong. Anyone with an engineering background looks at the idea of hastily fielded changes put in and says this is not a good thing. Here’s a major player that took action on its own. Is that the way we want to do things?”

Sclavos says that if VeriSign had the launch to do over, “we would come out much earlier with broad-based information about the changes” the service would impart.

“Next time, we would create a best-practices model about how we would roll out new services that are more complete,” Sclavos says. “There are some fringe things that broke, and we probably could have been better about getting the news out earlier.”

As ICANN’s Crocker notes, one thing’s for sure: The Site Finder episode “got a lot more people involved in these questions than had been.”

Correction

In this article, the original version of the chart "Internet Traffic Climbing Fast" incorrectly listed totals of daily queries to the Internet's DNS from 1994 to 2003. The chart has been corrected.
