Oct 11, 2017 3:00 AM

How to use Redis for real-time metering applications

Take advantage of the open source in-memory database’s built-in data structures and commands to simplify metering at scale


Metering is not just a simple counting problem. It is often confused with measuring, but it is more than that: metering involves measuring, but as an ongoing process, usually with the goal of regulating the usage or flow of a resource over time. Modern applications incorporate metering in many different ways, ranging from counting people, objects, or events to regulating usage, controlling access, and allocating capacity.

Metering solutions generally must process large volumes of data while meeting stringent performance requirements. Depending on the scale of the solution, counting and metering might involve thousands if not millions of updates to a database every second. The primary requirements of a database to support such a solution are high throughput for write operations and low (sub-millisecond) latency for responses.

Redis, the open source in-memory database platform, delivers both of these benefits while also being cost-effective in terms of using minimal hardware resources. In this article we’ll examine certain features of Redis that make it a good choice for metering solutions, and how we can use Redis for that purpose. But first, let’s look at a few of the more common uses of metering.

Common metering applications

Metering is required in any application that must measure the use of a resource over time. Here are four common scenarios:

  1. Consumption-based pricing models. Unlike one-time or subscription-based payment models, consumption-based pricing models allow consumers to pay only for actual usage. Consumers enjoy greater flexibility, freedom, and cost savings while providers gain greater consumer retention.
    Implementing such models can be tricky. Sometimes the metering system has to track many items of usage and many metrics in a single plan. For example, a cloud provider may set different pricing levels for CPU cycles, storage, throughput, number of nodes, or length of time a service is used. A telecommunications company may set different levels of allowed consumption for minutes, data, or text. The metering solution must enforce capping, charging, or extending services depending on the type of consumption-based pricing.
  2. Restricting resource utilization. Every service on the Internet can be abused through excessive usage unless that service is rate limited. Popular services such as Google AdWords API and Twitter Stream API incorporate rate limits for this reason. Some extreme cases of abuse lead to denial of service (DoS). To prevent abuse, services and solutions that are accessible on the Internet must be designed with proper rate limiting rules. Even simple authentication and login pages must limit the number of retries for a given interval of time.
    Another example where restricting resource utilization becomes necessary is when changing business requirements put a greater load on legacy systems than they can support. Rate limiting the calls to the legacy systems allows businesses to adapt to growing demand without needing to replace their legacy systems.
    In addition to preventing abuse and reducing load, good rate limiting also helps with the management of bursty traffic scenarios. For example, an API enforcing a brute force rate-limiting method may allow 1000 calls every hour. Without a traffic-shaping policy in place, a client may call the API 1000 times in the first few seconds of every hour, perhaps exceeding what the infrastructure can support. Popular rate-limiting algorithms such as Token Bucket and Leaky Bucket prevent bursts by not only limiting the calls, but also distributing them over time.
  3. Resource distribution. Congestion and delays are common scenarios in applications that deal with packet routing, job management, traffic congestion, crowd control, social media messaging, data gathering, and so on. Queueing models offer several options for managing the queue size based on the rate of arrival and departure, but implementing these models at large scale isn’t easy.
    Backlog and congestion are constant worries when dealing with fast data streams. Clever designers need to define acceptable queue length limits, while incorporating both the monitoring of queuing performance and dynamic routing based on queue sizes.
  4. Counting at scale for real-time decision making. E-commerce sites, gaming applications, social media, and mobile apps attract millions of daily users. Because more eyeballs yield greater revenue, counting visitors and their actions accurately is critical to business. Counting is similarly useful for use cases such as error retries, issue escalation, DDoS attack prevention, traffic profiling, on-demand resource allocation, and fraud mitigation.
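To make the Token Bucket idea mentioned above concrete, here is a minimal sketch in plain Java. The class and its parameters are invented for illustration and are not taken from any library or from the article's sample repository. Tokens refill at a fixed rate, and a call is admitted only if a token is available, which spreads calls over time instead of admitting a whole hour's quota in the first few seconds.

```java
// Minimal token-bucket rate limiter (illustrative sketch, not production code).
class TokenBucket {
    private final long capacity;       // maximum tokens the bucket can hold
    private final double refillPerMs;  // tokens added per millisecond
    private double tokens;             // current token count
    private long lastRefillMs;         // timestamp of the last refill

    // The caller supplies timestamps (nowMs) so the logic is deterministic and testable.
    TokenBucket(long capacity, long tokensPerSecond, long nowMs) {
        this.capacity = capacity;
        this.refillPerMs = tokensPerSecond / 1000.0;
        this.tokens = capacity;        // start with a full bucket
        this.lastRefillMs = nowMs;
    }

    // Returns true if a token was available and consumed, false otherwise.
    synchronized boolean tryAcquire(long nowMs) {
        refill(nowMs);
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    // Add tokens for the elapsed time, capped at the bucket's capacity.
    private void refill(long nowMs) {
        long elapsedMs = nowMs - lastRefillMs;
        if (elapsedMs > 0) {
            tokens = Math.min(capacity, tokens + elapsedMs * refillPerMs);
            lastRefillMs = nowMs;
        }
    }
}
```

With a capacity of 2 and a refill rate of 1 token per second, for example, two calls can pass immediately and later calls are admitted at most once per second.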

Metering design challenges

Solution architects have to consider many parameters when building a metering application, starting with these four:

  1. Design complexity. Counting, tracking, and regulating volumes of data—especially when they arrive at a high velocity—is a daunting task. Solution architects can handle metering at the application layer by using programming language structures. However, such a design is not resilient to failures or data loss. Traditional disk-based databases are robust, and promise a high degree of data durability during failures. But not only do they fall short of providing the requisite performance, they also increase complexity without the right data structures and tools to implement metering.
  2. Latency. Metering typically involves numerous, constant updates to counts. Network and disk read/write latency adds up while dealing with large numbers. This could snowball into building up a huge backlog of data leading to more delays. The other source of latency is a program design that loads the metering data from a database to the program’s main memory, and writes back to the database when done updating the counter.
  3. Concurrency and consistency. Architecting a solution to count millions or billions of items gets complex when events are captured in different regions and all need to converge in one place. Data consistency becomes an issue if many processes or threads update the same count concurrently. Locking techniques avoid consistency problems and deliver transaction-level consistency, but they slow down the solution.
  4. Durability. Metering affects revenue numbers, which implies that ephemeral databases are not ideal in terms of durability. An in-memory datastore with durability options is a perfect choice.

Using Redis for metering applications

In the following sections we will examine how to use Redis for counting and metering solutions. Redis has built-in data structures, atomic commands, and time-to-live (TTL) capabilities that can power metering use cases. Redis executes commands on a single thread, so all database updates are serialized, which enables Redis to perform as a lock-free data store. This simplifies application design, because developers don't need to spend any effort synchronizing threads or implementing locking mechanisms for data consistency.

Atomic Redis commands for counting

Redis provides commands, such as INCR, INCRBY, INCRBYFLOAT, and HINCRBY, that increment values atomically on the server, without requiring the application to read them into its own memory first.

Redis stores counter values as base-10 64-bit signed integers. The maximum limit for an integer is therefore a very large number: 2^63 – 1 = 9,223,372,036,854,775,807.
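For example, a handful of single commands cover most counting needs (the key names below are illustrative, not from the article):

```
INCR page_views                     # add 1 to page_views
INCRBY page_views 10                # add 10
INCRBYFLOAT revenue 0.40            # add a floating-point amount
HINCRBY customer:1001 usage 20      # increment a field inside a Hash
```

Each command executes atomically on the server, so concurrent clients never observe a partial update.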

Built-in time-to-live (TTL) on Redis keys

One of the common use cases in metering is to track usage against time and to cut off resources after the time runs out. In Redis, one can set a time-to-live value on a key, and Redis will automatically delete the key after the set timeout. Commands such as EXPIRE, PEXPIRE, EXPIREAT, and SETEX offer several ways of expiring keys.

The TTL and PTTL commands return the time-to-live remaining on a key in seconds and milliseconds, respectively.
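As an illustration (the key names here are invented), a short redis-cli exchange might look like this:

```
SET api_quota:1001 0
EXPIRE api_quota:1001 3600    # key disappears after one hour
TTL api_quota:1001            # remaining lifetime in seconds
PTTL api_quota:1001           # remaining lifetime in milliseconds
SETEX session:1001 3600 abc   # set a value and its expiration in one step
```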

Redis data structures and commands for efficient counting

Redis is loved for its data structures, such as Lists, Sets, Sorted Sets, Hashes, and HyperLogLogs. Many more can be added through the Redis modules API.

[Image: Redis data structures. Source: Redis Labs]

Redis data structures come with built-in commands that are optimized to execute with maximum efficiency in memory (right where the data is stored). Some data structures help you accomplish much more than the counting of objects. For example, the Set data structure guarantees uniqueness to all the elements.

Sorted Set goes a step further by not only ensuring that only unique elements are added to the set, but also allowing you to order the elements by a score. Ordering your elements by time in a Sorted Set, for example, gives you a time-series store. Redis commands let you fetch elements in a given order, or delete items you no longer need.
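For instance, a time-ordered Sorted Set can be queried and pruned with standard commands (the key, scores, and members here are invented for illustration):

```
ZADD events 1507712400 "login:alice"          # score = Unix timestamp
ZADD events 1507712460 "login:bob"
ZRANGEBYSCORE events 1507712400 1507712460    # events within a time range
ZREMRANGEBYSCORE events 0 1507712399          # drop events older than the range
```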

HyperLogLog is another special data structure, one that estimates the count of millions of unique items using a small, fixed amount of memory, without storing the items themselves.
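A sketch of unique-visitor counting with HyperLogLog (the key and member names are invented):

```
PFADD visitors:2017-10-11 "user:1001" "user:1002" "user:1003"
PFCOUNT visitors:2017-10-11                   # approximate unique count
PFMERGE visitors:week visitors:2017-10-11 visitors:2017-10-12
```

The count returned by PFCOUNT is an estimate, with a standard error of about 0.81 percent.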

Redis persistence and in-memory replication

Metering use cases such as payments involve storing and updating information that is critical to businesses. Loss of data has a direct impact on revenue. It can also destroy billing records, which are often a compliance or governance requirement.

You can tune consistency and durability in Redis based on your data requirements. If you need a permanent proof of record for your metering data, you can achieve durability through Redis’s persistence capabilities. Redis supports AOF (append-only file), which copies write commands to disk as they happen, and snapshotting, which takes the data as it exists at one moment in time and writes it to disk.
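In redis.conf, these durability options map to directives like the following (the values shown are common settings, not recommendations):

```
appendonly yes           # enable the append-only file (AOF)
appendfsync everysec     # fsync the AOF to disk once per second
save 900 1               # snapshot if at least 1 key changed in 900 seconds
```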

Built-in lock-free Redis architecture

Redis processing is single threaded; this ensures data integrity, as all the write commands are automatically serialized. This architecture relieves the developers and architects from the burden of synchronizing threads in a multithreaded environment.

In the case of a popular consumer mobile application, thousands and sometimes millions of users might be accessing the application simultaneously. Let’s say the application is metering the time used, and two or more users can share minutes concurrently. Parallel threads can then update the same object without any extra work to ensure data integrity. This reduces the complexity of the application design while ensuring speed and efficiency.

Redis metering sample implementations

Let’s take a look at sample code. Several of the scenarios below would require very complex implementations if the database used was not Redis.

Blocking multiple login attempts

To prevent unauthorized access to accounts, websites sometimes block users from making multiple login attempts within a stipulated time period. In this example, we restrict the users from making more than three login attempts in an hour using simple key time-to-live functionality.

The key to hold the number of login attempts:

user_login_attempts:<user id>


Get the current number of attempts:

GET user_login_attempts:<user id>

If null, then set the key with the expiration time in seconds (1 hour = 3600 seconds):

SET user_login_attempts:<user id> 1 EX 3600

If not null, and if the count is greater than 3, throw an error.

If not null, and if the count is less than or equal to 3, increment the count:

INCR user_login_attempts:<user id>

Upon a successful login attempt, the key may be deleted as follows:

DEL user_login_attempts:<user id>

Pay as you go

The Redis Hash data structure provides easy commands to track usage and billing. In this example, let’s assume every customer has their billing data stored in a Hash, as shown below:

customer_billing:<user id>
     usage  <actual usage in the unit on which billing is based>
     cost   <cost billed to the customer>

Suppose each unit costs two cents, and the user consumed 20 units. The commands to update the usage and billing are:

hincrby customer_billing:<user id> usage 20
hincrbyfloat customer_billing:<user id> cost .40

As you may have noticed, your application can update the information in the database without requiring it to load the data from the database into its own memory. Additionally, you could modify an individual field of a Hash object without reading the whole object.

Please note: The purpose of this example is to show how to use the hincrby and hincrbyfloat commands. In a good design, you avoid storing redundant information such as both usage and cost.

Rate limiting

Rate limiting regulates how much or how frequently a resource is used or accessed by consumers. In the following three examples, we demonstrate increasing levels of sophistication in allocation of resource usage. The examples are all written in Java. In all three cases, cells (the messages or the items) arrive at random times. The business use case sets a limit on how many cells can arrive in a stipulated time window. The algorithm maintains the history of cell arrivals and decides whether the new cell is good to go or not. The diagram below shows the design of the solution. All the code samples can be accessed at https://github.com/redislabsdemo/RateLimiter.

[Image: Class diagram of RateLimiter and its subclasses. Source: Redis Labs]

Example 1: BurstyRateLimiter

In this implementation, the rate limiter allows cells to arrive in bursts. For example, if the use case has a rule that allows 1,000 cells in an hour, this class allows up to 1,000 cells in the first second. In other words, this algorithm doesn’t check for bursts, but allows the specified number of cells within a time window. The solution uses Redis’s Sorted Set data structure.

public class BurstyRateLimiter extends RateLimiter {
       private String key = "BurstyRateLimiter"; // default key
       private int window = 3600; // in seconds; 1 hour default
       private int actions = 360; // 1 action every 10 seconds

       // Returns true if the arrival of the cell is within the acceptance
       // rate, false if not.
       public boolean arrival(String cell){

              long currentTime = System.currentTimeMillis();
              long lastWindow = (currentTime - (window * 1000))/1000;

              // Delete all messages outside the window
              jedis.zremrangeByScore(key, "0", Long.toString(lastWindow));

              // Is the cardinality less than the allowed number of actions?
              long card = jedis.zcard(key);

              // If yes, add the message to the sorted set and return true
              if(card < actions){
                     jedis.zadd(key, (currentTime/1000), cell);
                     return true;
              }
              return false;
       }
}

Example 2: UniformRateLimiter

The UniformRateLimiter class avoids bursts by spreading out the arrival of cells throughout the time window. For example, if the use case allows 60 cells in an hour, this class allows only one cell every minute. This uses the time-to-live feature of Redis keys.

public class UniformRateLimiter extends RateLimiter {

       private String key = "UniformRateLimiter"; // default key
       private int window = 3600; // in seconds; 1 hour default
       private int actions = 360; // 1 action every 10 seconds
       private int interval = window/actions; // seconds between allowed cells

       // Returns true if the arrival of the cell is within the acceptance
       // rate, false if not.
       public boolean arrival(String cell){

              // Check whether the previous cell's key is still alive
              long ttl = jedis.ttl(key);
              if(ttl > 0){
                     return false;
              }

              // The key lives through the period defined by the interval
              if(key != null && cell != null){
                     jedis.setex(key, interval, cell);
                     return true;
              }
              return false;
       }
}

Example 3: SimpleCellRateLimiter

Unlike the previous two examples, this algorithm queues up all of the cells in Redis’s List data structure. It then uses the UniformRateLimiter class to determine when to pop a cell from the List.

public class SimpleCellRateLimiter extends RateLimiter implements Runnable{
       private String key = "SimpleCell"; // default key

       private RateLimiter uniformRateLimiter = null;

       public SimpleCellRateLimiter(Properties props) throws Exception{
              props.setProperty("type", RateLimiterFactory.DISTRIBUTION_TYPE_UNIFORM);
              uniformRateLimiter = RateLimiterFactory.getRateLimiter(props);
              Thread t = new Thread(this);
              t.start();
       }

       // This arrival method is different from the previous two examples.
       // In this example, the program queues up the cell in a List
       // data structure. A separate thread checks whether popping the cell
       // is within the acceptance rate.
       public boolean arrival(String cell){
              // Push the cell to the List
              jedis.lpush(key, cell);
              return true;
       }

       public void run(){
              try{
                     RedisConnection conn = RedisConnection.getRedisConnection();
                     Jedis jedis = conn.getJedis();

                     while(true){
                            String cell = jedis.rpop(key);
                            boolean success = false;
                            while(cell != null && !success){
                                   success = uniformRateLimiter.arrival(cell);
                                   if(success){
                                          // Take action here.....
                                          System.out.println("Out: "
                                           + (System.currentTimeMillis()/1000) + " " + cell);
                                   }
                            }
                     }
              }catch(Exception e){
                     e.printStackTrace();
              }
       }
}
Note that there is a Redis module for rate limiting called redis-cell. Redis-cell implements the sophisticated Generic Cell Rate Algorithm and provides a single command to manage rate limiting (http://redismodules.com/modules/redis-cell/). It offers a language-agnostic rate limiter for applications that have Redis as part of their software stack. As a Redis module, redis-cell enjoys the full benefits of Redis’s in-memory processing, internal data structures, scalability, and high availability, and it plugs into open source Redis 4.0 seamlessly.

Redis offers powerful tools and capabilities to meet the metering needs of various scenarios. It offers highly optimized data structures and commands that simplify counting and metering at scale. Its single-threaded processing gives you a lock-free architecture by default, while its atomic metering commands ensure data consistency. And Redis has been shown to deliver millions of operations per second with sub-millisecond latency on modest hardware, helping you to address metering implementation challenges in a cost-effective way.  

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.