Why Google might lose the enterprise AI wars

Google leads in cutting-edge AI research, but lags behind Amazon, Microsoft, and IBM in market share for enterprise cloud and AI solutions. To win the enterprise AI wars, the search giant must overcome two key disadvantages.


“Google has never understood enterprise,” asserts Chris Nicholson, CEO of Skymind. By contrast, Nicholson gets how businesses think. His company builds Deeplearning4j, the leading open-source, enterprise-ready library for deep learning. “You can do anything you want in consumer, because people aren’t paying. In enterprise, your customers hand you eight-figure checks and expect top-notch professional service along with golf games and steak dinners.”

The three current leaders in cloud computing services, Amazon (AWS), Microsoft (Azure), and IBM (IBM Cloud), understand this dynamic very well. While Google is widely acknowledged as the de facto leader in AI research with DeepMind and Google Brain, winning the machine-learning-as-a-service (MLaaS) wars is much harder than simply releasing free tools like TensorFlow.

“TensorFlow is a loss leader for Google,” explains Nicholson. Google lags in share of the enterprise cloud computing market, so it hopes to make up the difference by offering machine learning tools for free. Indeed, TensorFlow rapidly overtook other popular deep learning libraries like Theano, Caffe, and Torch, as measured by number of forks on GitHub and mentions on Stack Overflow, but these numbers are misleading. “Google has a Udacity course on deep learning where every student is required to fork TensorFlow,” Nicholson points out. “These indie developers have no money and don’t represent true business usage.”

Two major obstacles stand in Google’s way to cloud AI dominance: data gravity and lack of backwards compatibility. To illustrate data gravity, look no further than Amazon’s Snowball appliance. Each suitcase-sized device holds 80 TB of precious enterprise data; Amazon ships it to a customer’s on-premises data center to be loaded, then the customer ships it back for upload to AWS servers, with fleets of devices moving petabytes at a time. Ironically, this manual transfer is significantly faster and cheaper for large data sets than any internet method. Enterprises with data-hungry AI applications will have an easier time running algorithms on-premises or on AWS and Azure, where their data already lives.
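The data-gravity point survives a back-of-envelope check. The figures below (a dedicated 1 Gbps uplink and a 1 PB data set) are illustrative assumptions, not numbers from the article; only the 80 TB Snowball capacity comes from the text above:

```python
# Back-of-envelope: moving 1 PB over a dedicated 1 Gbps link
# versus shipping a fleet of 80 TB Snowball devices.

PETABYTE_BYTES = 10**15
LINK_GBPS = 1  # assumed dedicated 1 Gbps uplink

# Bytes -> bits, divided by link speed in bits per second.
transfer_seconds = PETABYTE_BYTES * 8 / (LINK_GBPS * 10**9)
transfer_days = transfer_seconds / 86_400
print(f"Network transfer: {transfer_days:.0f} days")  # ~93 days

SNOWBALL_TB = 80  # per-device capacity cited in the article
devices_needed = -(-1_000 // SNOWBALL_TB)  # ceil(1000 TB / 80 TB)
print(f"Snowballs for 1 PB: {devices_needed}")  # 13 devices
```

At roughly three months of sustained saturation for a single petabyte, a round-trip courier shipment measured in days wins easily, which is why the data tends to stay put.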

Backwards compatibility is another reason why enterprises aren’t rushing to adopt the shiniest new languages and tools that AI researchers are using. “Most enterprises run on the Java Virtual Machine (JVM), even Apple,” reveals Nicholson. “Java is so popular because it is backwards compatible, whereas Python 3 doesn’t even work with Python 2.” Most deep learning libraries are designed to help AI researchers write better papers, not for deployment in production environments. Once data scientists and machine learning engineers are done with their Python prototypes, solutions like Deeplearning4j are required to port those prototypes over to scalable, enterprise-friendly Java.
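The Python 2/3 break Nicholson mentions is easy to demonstrate. A minimal sketch (run under Python 3) of two well-known incompatibilities:

```python
# Two small examples of why Python 2 code breaks under Python 3.

# 1. Integer division: Python 2's `5 / 2` truncates to 2;
#    Python 3 returns a float.
result = 5 / 2
print(result)  # 2.5 under Python 3

# 2. print: Python 2 accepted the statement form `print "hello"`,
#    which is a SyntaxError in Python 3; only the function-call
#    form below is valid in both only if parenthesized.
print("hello")

# Strings are a third break: Python 3 `str` is Unicode by default,
# where Python 2's was a byte string.
assert isinstance("hello", str)
```

For an enterprise with millions of lines in production, even breaks this small make a language migration a project rather than an upgrade, which is the contrast Nicholson draws with the JVM.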

“To succeed in enterprise, you must have a field team of forward-deployed engineers who can build proofs of concept (POCs) quickly,” Nicholson emphasizes. “Once you’ve built your customers a working deep learning solution, you make a bet that they’ll keep building on your technology and your services.”

This article is published as part of the IDG Contributor Network.