Translytical has become synonymous with real-time

Determine if your organization is combining the three principles of translytics to overcome the speed and scale challenges of today’s data-driven enterprise


The term “translytical” is a recent addition to database industry jargon. According to a recent Forrester Wave report, translytics is defined as “a unified and integrated data platform that supports multi-workloads such as transactional, operational, and analytical simultaneously in real time, leveraging in-memory capabilities including support for SSD, flash, and DRAM and ensures full transactional integrity and data consistency.”

While this may sound like any number of the database technologies you have used over the years, the key differentiator is the “trans” in translytics. A translytical database is one that can ingest and analyze data in-transaction, enabling real-time, in-event analytics and decisioning; this nuance is critical to an organization’s ability to take action in real time.

So what makes a database translytical? There are three key traits to consider.

Predictable, low latency at massive scale

To make decisions in-transaction, data must be collected, analyzed, and acted on within milliseconds, so low latency is a must for a translytical database. While low latency is an obvious requirement, the important differentiator here is predictability. Traditional OLTP databases, designed decades ago, deliver low latency at modest scale; the problem is trivial without scale. As scale grows, such systems start to buckle. Even if you get millisecond latency on average, keeping it predictably low 99.999 percent of the time is very hard: numerous blocking tasks must be converted to asynchronous ones, and routine garbage collection can introduce unwanted latency at seemingly random times. Most databases are not designed to keep latency predictably low all the time, so translytical technologies have to build this in as a fundamental part of their architecture.
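To see why averages mislead, here is a minimal, hypothetical simulation (not code from any particular database): most requests take about a millisecond, a small fraction hit a 200-millisecond, GC-style pause, and the mean and the 99.9th percentile end up telling very different stories.

```java
import java.util.Arrays;
import java.util.Random;

public class TailLatency {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int n = 100_000;
        double[] latenciesMs = new double[n];
        for (int i = 0; i < n; i++) {
            // Typical request: ~1 ms; roughly 3 in 1,000 hit a 200 ms GC-style pause.
            latenciesMs[i] = (rnd.nextInt(1000) < 3) ? 200.0 : 1.0 + rnd.nextDouble();
        }
        Arrays.sort(latenciesMs);
        double mean = Arrays.stream(latenciesMs).average().orElse(0);
        double p999 = latenciesMs[(int) (n * 0.999) - 1];
        System.out.printf("mean = %.2f ms, p99.9 = %.2f ms%n", mean, p999);
        // The mean stays near 2 ms while p99.9 is ~200 ms: the tail is what a
        // payment authorization or call setup actually experiences.
    }
}
```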

For example, consider a financial institution authorizing a payment or a telecommunications company connecting a mobile call: predictability is key to delivering a consistently quick response for consumers. Now consider that millions of credit card transactions and phone calls happen every minute, and each one must be authorized in real time. A translytical database can scale real-time decisioning and deliver predictable, low latency every time by turning blocking tasks into asynchronous ones and by avoiding garbage collection pitfalls with memory management techniques that eliminate long pauses (essentially collecting garbage slowly, all the time).
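A rough sketch of the first technique, assuming a hypothetical authorization service rather than any vendor’s API: the in-memory decision stays on the critical path, while the slow audit write is handed to a background executor so it can never stall the caller.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncAuthorization {
    private static final ExecutorService auditPool = Executors.newFixedThreadPool(4);

    // In-memory decision: microseconds, never blocks on I/O.
    static boolean authorize(long cardId, double amount) {
        return amount < 5_000.00;   // placeholder rule
    }

    // Slow, blocking side effect (stands in for a disk or network write).
    static void writeAuditRecord(long cardId, double amount, boolean approved) {
        try { Thread.sleep(50); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        boolean approved = authorize(4242L, 120.00);                        // critical path
        CompletableFuture.runAsync(
                () -> writeAuditRecord(4242L, 120.00, approved), auditPool); // off the path
        System.out.printf("approved=%b in %.3f ms%n",
                approved, (System.nanoTime() - start) / 1e6);
        auditPool.shutdown();
    }
}
```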

Complex and accurate real-time operational analytics

In today’s database world, data analytics is a given, because database technology at its core enables analytics. To rise to the level of translytical, a database must be able to run complex algorithms in real time, with an emphasis on “complex.” For instance, the decision to classify a credit card transaction as fraud is based on complex logic spanning more than 100 queries in the database, all working together to identify potential fraud patterns. From the moment the card is swiped, the database runs hundreds of input variables through this logic to determine whether to decline the transaction, all within milliseconds. In another example, take an online gaming platform that serves offers to players based on their current needs. A player’s status is constantly changing as they progress through the game, and their needs evolve with it. The underlying database must be able to process a player’s status in milliseconds to ensure the offer being shown is relevant in that moment: the right offer at the right time.

It’s not enough for a translytical database to support SQL and transactional semantics; to make these kinds of per-event insights useful, the context needed for the decision has to be available instantly. This often requires features that make analytical queries instant, such as recent-history collection, materialized aggregates, rank lookups, and table joins. It may also mean bringing the analytics closer to the data rather than the other way around (which can be slow), with features like stored procedures and user-defined functions that can make decisions millions of times per second at predictably low latency.
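As a rough illustration, here is a hypothetical, stripped-down version of that idea (a toy stand-in for a stored procedure, not any vendor’s API): each incoming transaction is combined with a materialized rolling aggregate kept right next to the data, and the approve-or-decline decision comes out of the same call.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

public class InDataDecision {
    record Txn(long cardId, double amount, long epochMs) { }

    // Per-card sliding window of recent transactions (the "materialized" state).
    private final Map<Long, ArrayDeque<Txn>> recent = new HashMap<>();

    // One call per swipe: update state and decide, all in the same place.
    boolean approve(Txn t) {
        ArrayDeque<Txn> window = recent.computeIfAbsent(t.cardId(), k -> new ArrayDeque<>());
        while (!window.isEmpty() && t.epochMs() - window.peekFirst().epochMs() > 60_000) {
            window.pollFirst();                       // expire entries older than 60 s
        }
        double spentLastMinute = window.stream().mapToDouble(Txn::amount).sum();
        window.addLast(t);
        // Placeholder rule standing in for the "100+ queries" of real fraud logic.
        return spentLastMinute + t.amount() <= 2_000.00;
    }

    public static void main(String[] args) {
        InDataDecision db = new InDataDecision();
        long now = System.currentTimeMillis();
        System.out.println(db.approve(new Txn(1L, 900.00, now)));        // true
        System.out.println(db.approve(new Txn(1L, 1_500.00, now + 10))); // false: window exceeded
    }
}
```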

Distributed enterprise resiliency

Most modern mission-critical applications, whether built in the cloud or on premises, require resilient data services to keep their designs simple. Be it credit card fraud detection or phone charging and billing systems, they all need their data to be available across the globe all the time (24x7x365). This requires features like high availability, disaster recovery, and multisite active-active replication to be designed into the database system. High availability allows operations to continue when a node fails. Disaster recovery is needed when an entire region is hit by a natural disaster such as an earthquake or a hurricane. Multisite active-active replication lets data be accessed simultaneously across two geographies for applications that are geographically distributed. Many nontranslytical databases rely on bolt-on software for these resiliency features, which adds cost and complexity. Translytical databases support these capabilities natively, making them a fundamental part of the application.
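To make one of those mechanisms concrete, here is a hypothetical sketch of quorum-acknowledged replication (the class and helper names are illustrative, not a real product’s interface): a write is applied to several replicas in parallel and committed once a majority acknowledges, so a single node failure neither loses the write nor stalls the application.

```java
import java.util.concurrent.*;

public class QuorumWrite {
    static final int REPLICAS = 3;
    static final ExecutorService pool = Executors.newFixedThreadPool(REPLICAS);

    // Stand-in for sending the write to one replica and waiting for its ack.
    static boolean applyOnReplica(int replica, String key, String value) {
        try { Thread.sleep(5 + replica); } catch (InterruptedException ignored) { }
        return true;
    }

    static boolean write(String key, String value) throws InterruptedException {
        CountDownLatch acks = new CountDownLatch(REPLICAS / 2 + 1);   // majority
        for (int r = 0; r < REPLICAS; r++) {
            final int replica = r;
            pool.submit(() -> {
                if (applyOnReplica(replica, key, value)) acks.countDown();
            });
        }
        // Commit is acknowledged as soon as a majority of replicas have the write.
        return acks.await(500, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("committed = " + write("balance:42", "1050.00"));
        pool.shutdown();
    }
}
```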

Translytics is setting the new standard for data analytics, particularly for modern enterprises looking to scale. Most industries have mastered the art of connectivity and data collection in today’s IoT-driven environment; now it’s time to focus on making the most of that data in real time, using the right technology.


Copyright © 2018 IDG Communications, Inc.