Although Google's non-search products, such as Google Apps, its Web-hosted collaboration and communication software suite, get much attention, search technology and its companion ad system and network still generate most of the company's revenue.
At last week's Web 2.0 Summit, IDG News Service caught up with Marissa Mayer, Google's vice president of Search Products & User Experience, to chat about video search, semantic versus keyword search, Google's universal search effort, and the challenge of indexing the "deep Web."
What follows is an edited transcript of the interview:
IDGNS: There are different technology approaches to video search. Blinkx, for example, maintains it does it better than Google because it indexes the text of what is said in videos with speech recognition technology. Where is Google with video search today?
Mayer: Google Video has had an interesting evolution. When we first launched it, it was based on closed captions, so literally a transcription of the program, but interestingly, you couldn't play video. So we changed it so that you could play video, and now we're searching the meta content. That said, one of the elements likely to shape the future of search is speech recognition.
You may have heard about our [directory assistance] 1-800-GOOG-411 service. Whether or not free-411 is a profitable business unto itself is yet to be seen. I myself am somewhat skeptical. The reason we really did it is because we need to build a great speech-to-text model ... that we can use for all kinds of different things, including video search.
The speech recognition experts that we have say: If you want us to build a really robust speech model, we need a lot of phonemes -- a phoneme being a unit of sound as spoken by a particular voice with a particular intonation. So we need a lot of people talking, saying things so that we can ultimately train off of that. ... So 1-800-GOOG-411 is about that: Getting a bunch of different speech samples so that when you call up or we're trying to get the voice out of video, we can do it with high accuracy.
IDGNS: What about non-speech content in videos -- the action in the clip?
Mayer: That's going to be particularly hard, given that most of Google's approaches are based on text right now. So we really do need the text, which is why our inclination is to build a great speech-to-text model and pull the text out. ... That said, there are a lot of instances of humor, context, things that happen in frame that don't necessarily have words, but for that we're going to have to rely on the community to do things like tagging.
There is some very early research happening around recognizing faces in videos, recognizing particular objects -- understanding that, hey, there's a ball in the frame right now -- but it's very early and not at all ready to be deployed in a commercial application.
IDGNS: Some people criticize Google because its query-analysis technology is mostly based on keywords as opposed to understanding natural language, like full sentences.