PARC (Palo Alto Research Center), previously known as Xerox PARC, has been responsible for some of the greatest innovations in computing, including the graphical user interface and laser printing. PARC was spun out of Xerox in 2001 as an independent subsidiary and is now working on projects like content-centric networking. To find out the latest developments at PARC, InfoWorld Editor at Large Paul Krill talked to PARC CEO Mark Bernstein late last week during the Microsoft Global High-Tech Summit 2010 event in Santa Clara, Calif.
InfoWorld: PARC is known for innovations like the graphical user interface. What else came out of PARC?
Bernstein: Distributed computing at large, including garbage collection, mail servers, file servers, pull-down menus, Ethernet. Laser printing, which is most important to Xerox.
[ Also on InfoWorld: See "10 fool-proof predictions for the Internet in 2020" in InfoWorld for more on content-centric networking. ]
InfoWorld: What great innovations are going on there now?
Bernstein: There are two big bets that we are investing in for the future, which are separate from the things that we have to engage customers with right now. The one you'd probably be most interested in is content-centric networking. Van Jacobson, who helped fix TCP/IP, was the chief scientist at Cisco. He came to PARC about three years ago with a vision for overhauling how the Internet operates and moving it from a point-to-point plumbing problem to a more distributed content model.
InfoWorld: Content-centric networking sounds like the semantic Web. Is that what it's about?
Bernstein: No. The focus is on the actual content that is shipped around the network. If you look at the old model of how the Internet originally operated, it was to connect a person at Point A with a resource at Point B, so a researcher at UCLA could connect with a mainframe at Princeton. And if you think about what's actually happened to the Internet, it's really about content that's being shipped around. And particularly now with video, data, images, music, whatever, the Internet's having to handle multiple shipments of the same data is a real drag on the performance of the network. So the intention here is to have content available in the network with a unique identifier and let people access content wherever it is, as opposed to going to a specific node on the network to acquire it.
Let's say you've got 1,000 people who all get a link from their friend that says, "You've got to check this out at YouTube." And you've got 1,000 people that are all hitting YouTube at the same time for the same content, and that's being shipped 1,000 different times. So there's a notion that there's an intelligence in the network that can -- and that's part of what content-centric networking is about -- that allows the most accessible route to the content to be achieved. And the goal there is to reduce all the overhead that right now is in the headers of messages and content flying around the network, to be able to strip all that out of the packets and really be able to allow more of the content to actually be delivered as opposed to all the overhead. So it's reducing the transmission overhead in the Web. It's being able to allow higher performance by more closely associating the need of the individual user with specific content. You're not going back to YouTube for it, you're going to where it resides cached on the network.
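The caching idea Bernstein describes can be illustrated with a toy simulation. This is a sketch of the general concept only, not PARC's actual CCN software: the node classes, content names, and counters below are all illustrative assumptions. Content is requested by a unique name, and any node along the path that has already seen that content answers from its cache instead of forwarding the request back to the origin.

```python
# Toy illustration of the content-centric caching idea described above.
# All class and content names here are illustrative assumptions,
# not PARC's actual CCN implementation.

class Node:
    """A network node with a content store (in-network cache)."""
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream   # next hop toward the origin
        self.content_store = {}    # content name -> data

    def request(self, content_name):
        # Serve from the local cache if this content was seen before.
        if content_name in self.content_store:
            return self.content_store[content_name]
        # Otherwise forward the request upstream, then cache the reply
        # so later requests for the same name never leave this node.
        data = self.upstream.request(content_name)
        self.content_store[content_name] = data
        return data

class Origin(Node):
    """The original publisher -- e.g. a video site."""
    def __init__(self, name, catalog):
        super().__init__(name)
        self.content_store = dict(catalog)
        self.hits = 0   # how many requests actually reached the origin

    def request(self, content_name):
        self.hits += 1
        return self.content_store[content_name]

origin = Origin("video-site", {"/videos/clip": b"<video bytes>"})
edge = Node("isp-router", upstream=origin)

# 1,000 users behind the same edge router all ask for the same clip:
for _ in range(1000):
    edge.request("/videos/clip")

print(origin.hits)  # 1 -- the origin is contacted once; 999 requests hit the cache
```

In the point-to-point model every one of those 1,000 requests would travel to the origin; here the content's name, not its location, determines where it is served from, which is the performance win Bernstein is describing.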
InfoWorld: When are we going to see some of that technology on the market, or is some already there?