Many analytical dashboards are graphical clickfests -- it's great that you see the high level, but when you want to get to the next level, guess what? You have several layers of abstraction to click through. They are as byzantine as they are informative. This is particularly true where real-world assets are involved.
Take the power industry, for instance. Having worked on a few smart-grid analytics projects, I hold this one near and dear to my heart. Great, you have an outage. You see there is a faulty asset. How do you inspect what is wrong? Obviously you can go there, but if most of your data is in electronic form, what is the next best thing to being there? Virtual reality offers a solution.
I recently spoke to Steve Ehrlich of Space-Time Insight. Space-Time has products for the power, oil and gas, logistics, and related industries. The company has a new pilot that uses Oculus Rift (a virtual reality headset), Unity, WebGL, and other technologies to deliver a unique user experience.
In the power industry, you could have a meter outage (a type of “asset” failure in industry parlance) due to a faulty transformer. One view of this would be a dashboard or map showing an outage; then you can click down to the meter to find the real-world problem.
Ehrlich’s solution is the next best thing to being there. What if you could actually see a 3D model of a transformer, with a red light flashing and the flooding or smoke or whatever's wrong right in front of you? A user could act on the device without going to another application and, say, dispatch a work team.
The front end is obviously cool enough on its own, but what about the back end? According to Ehrlich, “[Space-Time’s] software is an in-memory solution, so we take big data from wherever the data is stored. If the data’s in a big data system, if it’s in Hadoop or sits in a Greenplum, or if it’s in HANA, if it’s in a traditional database, those are all different sources of information that we would use to present the data to the user.” In other words, while the company's solution is focused on specific industries, this is very much based on conventional big data visualization.
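The details of Space-Time's implementation aren't public, but the pattern Ehrlich describes is familiar: hide the storage backend behind a common interface and pull everything into memory as one normalized stream of records. A minimal sketch of that idea, with entirely hypothetical connector functions standing in for real Hadoop, Greenplum, HANA, or RDBMS queries:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class AssetReading:
    """One normalized sensor reading, whatever backend it came from."""
    asset_id: str
    metric: str
    value: float

# Hypothetical connectors: in a real deployment each would query its own
# store (Hadoop, Greenplum, HANA, a traditional database). Here they just
# yield canned records so the sketch stays self-contained and runnable.
def hadoop_source() -> Iterable[AssetReading]:
    yield AssetReading("xfmr-17", "temperature_c", 94.2)

def rdbms_source() -> Iterable[AssetReading]:
    yield AssetReading("xfmr-17", "load_kva", 48.5)

def load_in_memory(sources: List[Callable[[], Iterable[AssetReading]]]) -> List[AssetReading]:
    """Drain every source into one in-memory list; consumers (a dashboard,
    a VR scene) never need to know which backend a reading came from."""
    readings: List[AssetReading] = []
    for source in sources:
        readings.extend(source())
    return readings

readings = load_in_memory([hadoop_source, rdbms_source])
```

The point of the sketch is the shape, not the connectors: once readings are normalized in memory, the same data can feed a 2D map or a 3D transformer model.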
This often involves integrating disparate data sources. The solution requires geolocation and spatial data to represent objects to the user. Video streams and pictures are also helpful in the case of physical security breaches or disasters: overlaid on the model, they help the user make better decisions.
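One small piece of that overlay problem is choosing which camera feed to attach to an asset's model. A toy sketch of the geolocation step, using the standard haversine great-circle distance and invented coordinates and feed names:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical camera feeds near a substation; all coordinates are made up.
feeds = [
    {"id": "cam-north", "lat": 37.70, "lon": -122.45},
    {"id": "cam-south", "lat": 37.40, "lon": -122.45},
]
asset = {"lat": 37.68, "lon": -122.45}

def nearest_feed(asset, feeds):
    """Pick the feed closest to the asset to overlay on its 3D model."""
    return min(feeds, key=lambda f: haversine_km(
        asset["lat"], asset["lon"], f["lat"], f["lon"]))
```

A production system would filter by field of view and freshness as well, but nearest-by-distance is the obvious starting heuristic.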
Beyond power, Ehrlich feels this technology is useful in any asset-intensive industry. “If you think about a transportational logistics company being able to get inside a plane ... to be able to interact virtually with the engine or different parts of the plane I think is very powerful and very useful ... It’s hard to describe an asset to somebody. It’s only when you really see it that you know what they’re talking about. And to be able to be there and collaborate with people on the ground and perhaps in an airport or maintenance facility is very, very powerful.”
I asked about more theoretical fields like finance, and Ehrlich was more skeptical:
It’s not an area we have explored, but if you’re looking at data, something in a virtual world...[it doesn't] have a virtual presence, right? I think there's value in terms of using WebGL to look at data in a high volume, and looking at it in a more interesting and interactive way, but I’m not sure in a virtual reality world how that would work. I think in virtual reality you want something physical, you want to see a store, or you want to see a warehouse, you want to see something that you’re analyzing. So if it’s financial analysis of how is my store doing, and the revenue is dropped, you want to perhaps be able to go to that store and see that it’s snowing, and that’s why the revenue’s dropped that day. In a virtual reality world it has to be used in the context of the asset or something that you’re looking at.
A 3D VR version of that Angelina Jolie moment in the classic movie "Hackers" may not yet be upon us. However, combining the technology with machine learning algorithms could be exciting. Ehrlich gave an example of predicting when a device would fail: if a device is failing, how much longer would it last if you added a bit more oil?
What do we get next? For Ehrlich, the answer lies in audio and voice -- not only seeing a broken device but being able to walk around, hear it, and walk toward it. Not only gesturing, but the Scotty in "Star Trek IV" moment, “OK, computer, dispatch the repair team,” which was way more futuristic before I started using my phone that way.
How will this kind of context change the way users behave or act upon data? Will this all become commonplace? Will it be a cute but seldom practical gimmick like Siri? Who knows, but it at least gives me a reason to expense an Oculus device for "research."