InfoWorld's 2016 Technology of the Year Award winners

InfoWorld editors and reviewers pick the best hardware, software, development tools, and cloud services of the year

2016 Technology of the Year Awards
Jane Van Ginkel

If you described 2015 as the year of the container, you wouldn’t be wrong. But 2015 was also a big year for distributed computing, in-memory analytics, machine learning, platform as a service, real-time apps, single-page apps, low-code mobile development, software-defined networking ... you get the picture.

Everything is changing. Our 2016 Technology of the Year Award winners -- 31 products in all, handpicked by InfoWorld’s small army of reviewers -- are the platforms, databases, developer tools, applications, and cloud services that are reshaping IT and redefining the modern business.

See also: 

Technology of the Year 2016: The best hardware, software, and cloud services

TOY 2016 Docker


Naturally, Docker is a Technology of the Year shoo-in not only for the agility it brings to developers, but for how far the project has come in the course of the past year. Docker’s entire networking model has been upgraded. Its handling of storage has been reworked to such a degree that it is now forming the basis for a cottage industry of third-party products. It has finally done away with the need to run containers as root. And it has gained a smorgasbord of additional tooling.

Also striking is the way Docker has influenced the direction and development of not only other software projects, but entire software industries. VMware sensed, correctly, that containers were providing a better solution to many of the problems VMs were originally meant to solve, and reworked much of its product line to welcome containers as first-class citizens. Microsoft too saw the light and set to work adding container capabilities to Windows Server -- and not merely a clone of the functionality, but support for Docker itself.

Google, Amazon, Red Hat, IBM, Cisco -- every data center and cloud vendor is catching Docker fever. It has been a long time since any one piece of software came along that had such a transformative effect, and it will be nothing short of fascinating to watch how Docker and its partners continue to shepherd its evolution through the coming year.

-- Serdar Yegulalp

TOY 2016 Kubernetes


Managing the deployment of Docker containers can be a headache, especially when an application or microservice is designed to be deployed with its own dependent services. That’s where Kubernetes comes in. Kubernetes is an orchestration tool, similar to Marathon running on Apache Mesos, but only for Docker containers. It schedules containers to run across a cluster of machines, deploying them individually or in tightly coupled groups called pods, and keeping resource needs in mind as it distributes the work.

With Kubernetes, unlike the Mesos/Marathon combination, you get service discovery for free in the form of etcd, which is a nice feature to have out of the box. Kubernetes also provides quite a bit of flexibility regarding the underlying platform. It powers Google’s Container Engine, runs easily on a handful of public clouds, and can be deployed on VMware vSphere, Mesos, or Mesosphere DCOS.

If you run Kubernetes on Google Compute Engine, Amazon Web Services, or another public cloud, you can even let Kubernetes manage some of that infrastructure for you. Have a service you need to horizontally scale? Chances are, you’ll want a load balancer in front of that service. Instead of having to provision it yourself, you can tell Kubernetes you want your service to be load-balanced and it will do the cloud-specific work for you. Features like that make Kubernetes the front-running candidate for any container infrastructure.
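That load-balancing request is a one-line declaration. Here is a minimal Service manifest that asks Kubernetes to provision a cloud load balancer in front of a set of pods (the names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # Kubernetes does the cloud-specific provisioning for you
  selector:
    app: web            # route traffic to pods labeled app=web
  ports:
  - port: 80            # port exposed by the load balancer
    targetPort: 8080    # port the containers actually listen on
```

On a supported public cloud, applying this manifest is all it takes; Kubernetes wires the external load balancer to whatever pods currently match the selector.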

-- Jonathan Freeman

TOY 2016 CoreOS


Based on a heavily stripped-down version of Gentoo Linux, CoreOS is specifically designed for running containers, particularly in clusters. A companion management tool called fleet handles the scheduling of CoreOS instances across the cluster, while a distributed key-value store called etcd stores configuration data and supports service discovery.

The promise of container portability and efficiency has caught on like wildfire, and CoreOS is built from the ground up with that promise in mind. The only things that run on CoreOS run as Docker containers, including the debugging tools you may have been taking for granted. There isn’t even a package manager installed. CoreOS is also designed to be deployed as part of a distributed system, where hardware failures are expected and high availability is achieved by deploying multiple instances of services. If a node fails, fleet and etcd combine to ensure that replicas are quickly deployed elsewhere in the cluster.
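Concretely, a fleet unit file is just a systemd unit with an extra [X-Fleet] section for scheduling constraints. A minimal sketch (the unit name, say web@.service, and the image are illustrative) that runs an Nginx container and keeps replicas on separate machines might look like this:

```ini
[Unit]
Description=Web front end in a Docker container

[Service]
ExecStart=/usr/bin/docker run --rm --name web nginx
ExecStop=/usr/bin/docker stop web

[X-Fleet]
# Never schedule two instances of this unit on the same machine
Conflicts=web@*.service
```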

As a side effect of this distributed systems approach, CoreOS automatically installs updates and patches. When a new security vulnerability surfaces and a patch is released, CoreOS will perform a rolling update across your cluster with the fix. That sort of rolling update is only made possible through the orchestration of etcd and fleet.

-- Jonathan Freeman

TOY 2016 Joyent Triton

Joyent Triton

You’ve never heard of it before, right? But if you want a manageable, container-oriented architecture that is production ready and proven today, Joyent is a name to know. Joyent is a company with an impressive technology portfolio, and Triton -- software for running Docker-compatible containers on bare metal -- is the cornerstone.

If you used Solaris Zones on a Sun E10K years ago, then got stuck with VMs after the industry dumped Solaris for Linux, you probably felt a bit robbed. Docker brings containers back, but other than the packaging, Docker containers aren’t quite as nice as Zones. Joyent continues down the Solaris Container path but gives you a Linux-like feel, Linux and Docker compatibility, and all of the management tools necessary in the new era of IaaS, cloud apps, and “containerization.”

The upshot? Running containers on Joyent Cloud is cheaper than running containers on a virtualization-based cloud provider. If you want to run Docker on premises, you have a proven infrastructure solution with the management tools needed to make it work. Best of all, the key pieces are open source. This is about to become a crowded market, but Joyent got there early with arguably better technology. Let’s hope that matters!

-- Andrew C. Oliver

TOY 2016 Cisco ACI

Cisco ACI

Cisco has taken data center networking to new heights with its Application Centric Infrastructure, which is software-defined networking (SDN) writ large. ACI does not mesh with OpenFlow or the OpenDaylight SDN initiative, but takes an entirely different approach, whereby network configuration details are handled by network devices rather than controllers. The result is significant scalability, with Cisco bragging about supporting 200 leaf switches, five cores, and 200,000 endpoints on a single fabric.

ACI is built around a tenant model that encapsulates all traffic within the tenant purview, so there are no issues with overlapping IP subnets, and tenants are logically separated from other tenants, though they reside on the same fabric. Application models, which link server, network, and security resources, allow tenants to deploy apps from templates. These abstractions allow for straightforward management of hosted services or business units.

Orchestrating all of this is a cluster of controllers, which are servers that drive the API for integrating ACI workflows and for controlling the fabric itself. This API comes with a GitHub project that open-sources some ACI functions and provides for open integration options. Built on a Python foundation, ACI is likely the most open platform Cisco has ever released. Open, scalable, and software-defined, Cisco ACI is yet another sign of the sea change in networking brought about by SDN.

-- Paul Venezia

TOY 2016 Apache Mesos

Apache Mesos

When distributing many tasks across a large cluster of machines, the essential question becomes how to do it efficiently. Mesos, a top-level Apache open source project, abstracts the computational resources across a distributed cluster and offers slices to your applications, based on sharing priorities you set. In other words, Mesos acts like an operating system for the data center, distributing work across multiple machines without your having to manage and monitor resources on those machines yourself.
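The offer model can be sketched in a few lines of Python. This toy round-robin allocator is not the Mesos API -- real frameworks receive offers asynchronously and Mesos applies fair-sharing policies -- but it conveys the two-level idea: the master offers each agent's free resources, and each framework accepts only what it still needs.

```python
def run_offer_cycle(agents, frameworks):
    """Toy sketch of Mesos-style two-level scheduling (not the real Mesos API).
    The master walks each agent's free CPUs past the frameworks in turn;
    a framework takes what it needs and the rest stays available."""
    placements = []
    for agent, free_cpus in agents.items():
        for framework in frameworks:
            take = min(framework["cpus_needed"], free_cpus)
            if take > 0:
                placements.append((framework["name"], agent, take))
                framework["cpus_needed"] -= take
                free_cpus -= take
            if free_cpus == 0:
                break  # this agent is fully allocated; move to the next
    return placements

agents = {"node1": 4, "node2": 2}
frameworks = [{"name": "marathon", "cpus_needed": 3},
              {"name": "chronos", "cpus_needed": 2}]
placements = run_offer_cycle(agents, frameworks)
```

Running the cycle places Marathon's three CPUs on node1, then splits Chronos's two CPUs across the remaining capacity of node1 and node2.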

Chances are, you’ll be consuming Mesos through one of the many available frameworks written on its APIs. For instance, Marathon distributes long-running tasks with varying resource requirements across your cluster. Chronos allows you to run cron jobs in a fault-tolerant manner. Other popular frameworks for Mesos include Cassandra, Hadoop, Storm, and Spark.

If you need an extra push to check out Mesos this year, look at some of the companies that have put their weight behind Mesos. Twitter scooped up one of the co-writers of the original Mesos project and is running its data center with Mesos. Apple rebuilt the back-end components powering Siri using Mesos. Cisco Cloud Services is actively involved in building out frameworks for running Consul, Exhibitor, and Elasticsearch on Mesos. If you have many servers in production and lots of jobs to schedule, you may be able to get considerable benefits from incorporating Mesos into your system.

-- Jonathan Freeman

TOY 2016 Apache Spark

Apache Spark

Although it happened right in front of our eyes, it’s still hard to believe how quickly Spark upended the big data landscape. In a matter of months, in-memory analytics stopped being a specialty “rich person’s game” and became an everyday option that could be incorporated into any business intelligence project. Having elbowed Hadoop into a corner and obviated the need for MapReduce, Spark has basically altered the primary definition of big data.

The capstone was laid when Doug Cutting, creator of Hadoop and CTO of Cloudera, announced that Cloudera was making Spark the center of its universe. Oddly, the loudest noise around Spark is for real-time streaming, which is in fact one of its biggest weaknesses. But who knew that a simple API for distributed computing that works in-memory and is a natural fit for any functional programming language would turn out to be the cornerstone of a new era in data processing? Spark’s rapid rise and adoption will only continue to accelerate in 2016.
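That simple API is easy to appreciate even without a cluster. The toy class below mimics, on a single machine, the chained functional style of Spark's real RDD methods (map, flatMap, reduceByKey, collect); it is an illustration of the programming model, not PySpark itself.

```python
class MiniRDD:
    """Toy, single-machine mimic of Spark's chained RDD API (not PySpark),
    using the real method names: map, flatMap, reduceByKey, collect."""
    def __init__(self, data):
        self.data = list(data)

    def map(self, f):
        # Apply f to every element
        return MiniRDD(f(x) for x in self.data)

    def flatMap(self, f):
        # Apply f to every element and flatten the results
        return MiniRDD(y for x in self.data for y in f(x))

    def reduceByKey(self, f):
        # Combine values sharing the same key with f
        acc = {}
        for key, value in self.data:
            acc[key] = f(acc[key], value) if key in acc else value
        return MiniRDD(acc.items())

    def collect(self):
        return self.data

# The classic word count, written exactly as it reads in Spark
counts = (MiniRDD(["to be or not to be"])
          .flatMap(str.split)
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b)
          .collect())
```

In real Spark the same chain runs partitioned across a cluster, with the intermediate results held in memory rather than spilled to disk between stages.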

-- Andrew C. Oliver

TOY 2016 IBM Watson

IBM Watson Analytics

IBM is known as your grandfather’s technology company for good reason. Sooner or later, it will try to sell you a big, expensive box with custom innards. As for IBM software, the mere mention of WebSphere makes any developer or system administrator cringe in horror. But the Watson platform, which was introduced via a marketing stunt on “Jeopardy,” is a brand-new Big Blue. IBM is stamping Watson on a lot of things, but the most impressive, nonvaporous entry is Watson Analytics.

Watson Analytics is a wonder. It combines visualization with data tagging, machine learning, and cloud storage. Upload some data, and out will come interesting insights. However, while Watson is free to try, once you start crunching a real amount of data Watson gets really expensive, really quickly. Moreover, its integration with other clouds and databases is limited if you don’t want to pay $15,000 a month for a terabyte of storage.

That said, Watson Analytics is probably a market definer -- a tool by which most other predictive analytics tools will be measured. If IBM breaks Watson out of its box, it could own this sector of the market. But then, how would IBM sell you a big black box if you don’t want to run it in the company's cloud?

-- Andrew C. Oliver

TOY 2016 Splunk


Log analytics isn’t the sexiest part of information security, but it is a foundational one. Whether for threat detection, incident response, or forensics, organizations must log the right level of detail, find the relevant pieces of information in those logs, and somehow make sense of all of those pieces. Where should they turn?

A logical choice would be the undisputed king of log analytics, Splunk, with its detailed search tools, comprehensive interface, and global coverage. Being at the top of the heap hasn’t made Splunk complacent. In 2015, the company came out with Splunk Enterprise Security 4.0 and Splunk User Behavior Analytics, a combination of tools that allow organizations to detect breaches and track attackers’ steps through the network.

Splunk Enterprise Security helps security professionals defend against multistage attacks with tools to track ad hoc searches and activities; tools to place any event, activity, or annotation within an investigation timeline; and a framework that allows customers and third parties to extend the system by creating additional apps. Splunk User Behavior Analytics is a new product that combines Splunk’s machine data capabilities with user behavioral analytics technology from the Caspida acquisition.

Today’s sophisticated attacks call for more sophisticated approaches to security. With these new offerings, along with the new Splunk IT Service Intelligence to visualize operational health, Splunk is answering that call by bringing analytics and machine learning to bear on threat detection and response.

-- Fahmida Y. Rashid

TOY 2016 Tableau


Tableau is the prime exemplar of the business-user-driven data discovery and interactive analysis trend in BI that has largely taken over from traditional IT-driven reporting and analytics. Tableau can connect to a wide assortment of file and server data sources, including Excel workbooks, character- and tab-delimited files, statistical files, and upward of 40 server types. You can connect multiple data sources to a worksheet and create joins between tables or files.

Analysis in Tableau is a drag-and-drop process with property sheets. Tableau organizes your analyses into worksheets, dashboards, and stories. Dashboards can contain an arrangement of multiple worksheets, and you can create actions for one worksheet that affect other worksheets in the dashboard.

Stories combine multiple dashboards or worksheets into a narrative. Anyone can download the free Tableau Public app for Windows or Mac to create analyses and save them to the Tableau Public server, where they are publicly viewable. Tableau Server is a private Windows-hosted version of Tableau Public.

Tableau isn’t cheap, nor is it the best BI product for reporting purposes. But it supports a wide assortment of data sources, offers a large selection of chart types, provides excellent control over chart and dashboard appearance, and makes deep statistics available without writing code. And considering the complexity of the product, it’s not hard to learn. Tableau supplies sample data, videos, quick starts, live classes, and webinars to help people get up to speed.

-- Martin Heller

TOY 2016 Microsoft Office

Microsoft Office

This isn’t a big wet kiss for Office 2016, exactly. In our book, Microsoft Office achieved a much bigger milestone in the past year than the "meh" update to the Windows version or even the huge improvement to the Mac version. It was an achievement once considered unimaginable: good, capable versions of Word, Excel, and PowerPoint across every major computing platform. The Office apps for Windows, Mac, iOS, and Android -- as well as the Web -- are not only mature and stable, but share the same large core set of capabilities.

Furthermore, new and worthwhile cloud-backed features for the multiple platforms are expanding at a breathtaking rate: integration with the Box, Citrix, Salesforce, and iCloud storage services; mobile device management; the Sway “post-PowerPoint” presentation developer; Office 365 Video for centralized video sharing; Office Graph and Office Delve for information discovery across Office 365 workgroups and documents; Power BI to bring Excel reports together.

Microsoft still has plenty of work to do on the Office 365 back end. The collaboration features fall far short of world-class. Real-time co-authoring, for example, only works in Word -- and not very well. Acquired pieces, such as Yammer, still seem more bolted-on than integrated. Promised improvements to OneDrive for Business remain promises.

The pieces of Office 365 have many flaws, but the goal -- to embrace all of the major computing platforms with credible, complex, cloud-friendly products -- is beyond ambitious, and the progress has been remarkable.

-- Woody Leonhard

TOY 2016 Slack


A bit shy of two years old, Slack already boasts more than 2 million active daily users and is spreading beyond the realm of startups and into the enterprise.

The key to Slack’s appeal is the plethora of integrations that allow Slack to become the main hub of your business. Everything from answering support questions to fully automating cloud deployments can be accomplished within a Slack window, all while a Slack bot helpfully points out that you need to leave in 10 minutes to catch the next bus to get home.

Slack is poised to expand even further in 2016 with the recent launch of the Slack App Store and the long-awaited Enterprise version, which will bring much-needed support for organizational grouping and a unified approach to security and administration.

-- Ian Pointer

TOY 2016 Adobe Connect

Adobe Connect

WebEx and GoToMeeting have nothing on Adobe Connect, a high-end Web conferencing, webinar, and learning solution with persistent, customizable rooms arranged from pods. Pods are little windows with specific functionality, which are similar to what are called widgets in other contexts.

A host can arrange pods into layouts, and save and load layouts at will. The multiple layout feature allows the host to have persistent layouts for different audiences and activities. A second display area is visible to hosts and presenters, but not attendees. You can upload course materials and curricula for virtual classrooms.

In addition to persistent personal rooms with multiple chats, notes, pods, layouts, and so on, Connect supports persistent team rooms. That solves at least two problems: It enables a meeting to go on even if the manager is away or out sick, and it allows team members to refer to and work on shared meeting notes and diagrams stored in the team room at any time. Hosts can also create breakout rooms and assign attendees to them. Connect provides Web registration for webinar events.

You can create meeting recordings in the cloud, edit the recordings, and play them back at any time. You can also export meeting recordings in common formats and control the quality of the exported recording.

Connect requires Flash and an unsandboxed plug-in for best video performance. But its deep and expandable functionality makes it our first choice among Web conferencing systems.

-- Martin Heller

TOY 2016 Cloudera

Cloudera Impala

Connect Hive to your favorite SQL visualization tool and watch it time out. Turn to Impala, an MPP (massively parallel processing) SQL query engine built on Hadoop, and watch the big answers to your big data questions march into view. Basically, if you want to do BI on Hadoop, Impala is the most accessible and most easily implemented of the performant solutions.

Impala has always been “the weird thing in Cloudera’s GitHub” or “part of the CDH platform,” but it is now joining Apache with the rest of the Hadoop ecosystem. And while Cloudera is now saying Spark is the “One Platform,” it is continuing to develop this MPP/BI Hadoop space by adding Kudu. Cloudera still thinks Impala is necessary, and we couldn’t agree more.

If you’re done with paying your proprietary MPP provider out the nose, and paying a premium for proprietary hardware to go with it, or if you recently built your data lake but can’t wait for Tableau to draw any longer, then Impala might be the animal you are looking for.

-- Andrew C. Oliver

TOY 2016 Apache Kafka

Apache Kafka

If you look closely at most big data streaming applications these days, you'll find Kafka playing a quiet but critical role, getting the job done without fuss. Created by LinkedIn to handle its intense messaging loads, Kafka now boasts installations at companies such as Netflix, Uber, Goldman Sachs, and Cisco, handling billions of events per day without breaking a sweat.

Kafka achieves high rates of sustained message throughput while maintaining durability through a distributed commit log. High throughput comes from Kafka's intelligent partitioning of incoming data streams, which enables parallel reads and writes. These partitioned data streams are then copied to a configurable number of replicas, which in turn are written to disk, preventing data loss and enabling an ability to "replay" the history of the data stream.
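The partitioning idea can be sketched in a few lines of Python. This is not the Kafka client -- real Kafka clients hash keys with murmur2 rather than CRC-32 -- but it shows why every message with the same key lands in the same partition, which is what preserves per-key ordering while still allowing parallel reads and writes.

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Deterministic hash of the key, modulo the partition count.
    # (Kafka's default partitioner uses murmur2; CRC-32 is a stand-in here.)
    return zlib.crc32(key) % num_partitions

def assign(messages, num_partitions=4):
    """Group (key, value) messages into partitions, preserving arrival order
    within each partition -- the guarantee Kafka consumers rely on."""
    partitions = {}
    for key, value in messages:
        partitions.setdefault(partition_for(key, num_partitions), []).append(value)
    return partitions
```

Because the mapping depends only on the key, producers on different machines independently agree on where a given key's messages go.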

Developers can be up and running with publishing or subscribing to Kafka streams in less than an hour. The coming year should be another great one for Kafka. Version 0.9, released at the end of 2015, adds Kerberos authentication, quotas, and Kafka Connect, enabling ETL pipelines to be created with little to no custom coding required. These new features should help Kafka spread even further throughout the enterprise world.

-- Drew Nelson

TOY 2016 Apache Ambari

Apache Ambari

When I first saw Ambari, I was impressed only because I knew how complicated Hadoop was. Beyond initial configuration and monitoring, Ambari typically failed when modifying deployments, and it couldn’t handle anything beyond a basic non-HA deployment. But over time Ambari has gone from “amazing it works at all” to “amazing accomplishment.”

For the moment, Ambari is mainly what you use with Apache Hadoop or the Hortonworks distribution; you will still have to use Cloudera Manager for the big data tools favored by Cloudera (that is, Impala) rather than Hortonworks. However, Ambari has continually supported more of the big data platform formerly known as “Hadoop” and gone deeper with its tools.

Today Ambari is a pluggable monitoring and management system for a complex, diverse, and ever-growing platform that, unlike most similar tools, actually works.

-- Andrew C. Oliver

TOY 2016 Python

Python 3.5

With Python 3.5, the venerable, popular, powerful, and easy-to-use language took another leap forward in 2015. Coroutines, the ability to quickly and efficiently write asynchronous code, landed in Python as a native part of the language’s syntax, rather than as a tacked-on afterthought that left many Pythonists hungry for more.

Also new to Python this past year: type hinting, for the sake of code analysis and linting, and a matrix multiplication operator, making life easier for users of NumPy and other math-and-stats systems. Plus, projects to give Python a performance boost -- PyPy, Pyston, and Nuitka -- all saw major progress.
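All three additions fit in one short sketch. The Vec class below is a made-up stand-in for a NumPy-style type, shown only to demonstrate the new @ operator; the async/await syntax and the type hints are exactly as the language defines them.

```python
import asyncio
from typing import List

class Vec:
    """Hypothetical vector type implementing the new @ (matrix multiplication)
    operator via __matmul__, as NumPy's arrays do."""
    def __init__(self, data: List[float]) -> None:  # type hints, via the new typing module
        self.data = data

    def __matmul__(self, other: "Vec") -> float:
        return sum(a * b for a, b in zip(self.data, other.data))

async def fetch(name: str, delay: float) -> str:
    # Native coroutine syntax (async/await), new in Python 3.5
    await asyncio.sleep(delay)
    return name

async def main() -> list:
    # Run two coroutines concurrently; gather preserves argument order
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.0))

loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
results = loop.run_until_complete(main())
loop.close()
```

The dot product Vec([1, 2]) @ Vec([3, 4]) evaluates to 11, and results comes back as ["a", "b"] even though the second coroutine finishes first.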

The tide is finally turning in favor of Python 3 as the preferred version of the language. Not only are the vast majority of the most popular Python packages now compatible with Python 3, but Linux distributions such as Fedora started bundling Python 3 as the default Python interpreter. The language’s future looks so bright that shades are mandatory, not optional.

-- Serdar Yegulalp

TOY 2016 PHP


PHP had been getting a bit long in the tooth. The server-side scripting language, and the veritable DNA of modern Web hosting, was buckling under the weight of its own cruft. Resorting to third-party hacks, like Facebook’s HHVM, had become necessary to avoid the logjams.

PHP 7 brings a fresh reboot to the 20-year-old language, combining a remarkable speed improvement with lower runtime resource requirements. The new execution engine integrates PHPNG (Next Generation), which is based on the Zend Engine, for a 2X boost in performance benchmarks. In short, PHP 7 leaves PHP 5.6 in the dust.

On top of the speed boost, PHP 7 can halve your hardware requirements right out of the box, with few backward-compatibility issues. Developers gain some features as well, including return type declarations and broader scalar typing support, which will help reduce the time needed to write cleaner and safer code.

PHP has yet to introduce a JIT compiler like that in HHVM, but it’s on the to-do list. Plus, with the groundwork laid for asynchronous programming and multithreaded Web servers, the future for PHP is looking more promising than ever.

-- James R. Borck

TOY 2016 JetBrains PhpStorm

JetBrains PhpStorm

When it comes to easing PHP development, JetBrains PhpStorm is the tool at the top of the list. Released in November, PhpStorm 10 brings new tools for faster code analysis and inspection, letting you quickly trace the stack and drill into the maze of functions and script files that would otherwise grind server-side debugging efforts to a halt. It’s a real sanity saver for large-scale, multifile projects.

PhpStorm 10 also introduces full scaffolding for PHP 7, including code completion and refactoring. You’ll find the new tools for assessing backward compatibility to be essential for migrating old PHP codebases forward. Other niceties, like live variable inspection, help to streamline debugging.

A newly added Docker plug-in brings in-project support for Docker and container management into the IDE. New Web technologies get some love as well: Flow, Angular 2, Node.js inspections, and ECMAScript 2015 are all supported. If there were any doubts that PhpStorm is the go-to IDE for modern PHP development, PhpStorm 10 lays them to rest.

-- James R. Borck

TOY 2016 Rust


When Mozilla Labs hatched the Rust programming language, the plan sounded like the proverbial six impossible things before breakfast: Make it possible to program directly to the metal, but by way of a language that gives you the convenience and safety of higher-level languages. It's a tall order to fill, especially after so many other programming languages had tacked around those issues.

With Rust 1.5, the dream is starting to become a reality. Rust is already at the center of one major project: Mozilla’s “Servo” rendering engine, designed to eventually replace the rendering components within Firefox. But beyond Mozilla’s walls, software developers have started to gravitate to the language. Despite Rust’s complexities, those who get inside are welcomed both by the Rust community and by the language’s convenient and common-sense approach to programming. The curious should check out not only the language itself, but also the community’s curated lists of resources for all things Rust.

-- Serdar Yegulalp

TOY 2016 React


React is an open source JavaScript UI library that came out of Facebook a few years ago. The self-described V of client-side MVC, React boasts a speedy rendering mechanism that is built on an implementation of a virtual DOM. This abstraction lends itself to code that is more concise and easier to reason about. React is not an opinionated front-end framework, so it can be easily combined with other libraries and frameworks you might be using.

While React can be written in raw JavaScript, you are encouraged to write your applications in JSX, a JavaScript extension that peppers HTML-like syntax into your JavaScript code. You can use HTML elements alongside your custom elements, all of which are defined in-line, right where your logic for these elements is defined. Having built a solid UI library that gained a huge amount of traction in the past year, the React team followed up with the introduction of React Native in 2015. React Native implements React with native iOS and Android controls, giving mobile developers the ability to recruit native components for better looking and better performing apps.

-- Jonathan Freeman

TOY 2016 RethinkDB


Like MongoDB or Couchbase, RethinkDB is a clustered, document-oriented database that delivers the benefits of flexible schema, easy development, and high scalability. Unlike those other document databases, RethinkDB supports “real time” applications with the ability to continuously push updated query results to applications that subscribe to changes.

In this case, a “real time” application is one that must support a large flow of client requests that alter the state of the database and keep all clients apprised of those changes. RethinkDB expends a great deal of effort ensuring that data change events are quickly dispatched throughout the cluster. And it provides this high-speed event processing mechanism while offering plenty of control over database consistency and durability. Queries can include table joins, subqueries, geospatial queries, and aggregation.
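The push model is easy to picture with a toy in-memory sketch. This is pure Python, not the RethinkDB driver, but the old_val/new_val events it delivers mirror the shape of the change documents a RethinkDB changefeed pushes to subscribers.

```python
class ToyTable:
    """In-memory sketch of a changefeed: not RethinkDB, just the idea.
    Subscribers register once and are pushed every subsequent change."""
    def __init__(self):
        self.docs = {}
        self.subscribers = []

    def changes(self, callback):
        # Subscribe to all future changes on this table
        self.subscribers.append(callback)

    def insert(self, doc_id, doc):
        old = self.docs.get(doc_id)
        self.docs[doc_id] = doc
        # Push the change to every subscriber, changefeed style
        for notify in self.subscribers:
            notify({"old_val": old, "new_val": doc})

events = []
table = ToyTable()
table.changes(events.append)
table.insert("1", {"id": "1", "score": 10})
table.insert("1", {"id": "1", "score": 15})
```

The first event arrives with old_val of None (a brand-new document); the second carries both the previous and the updated document, so a client can render the delta without re-querying.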

RethinkDB doesn’t give you ACID compliance or strong schema enforcement. But its real-time push technology is ideal for underpinning applications that must provide clients with the most up-to-date view of database state. Further, its easy-to-grasp query language -- embedded in a host of popular programming languages -- and its out-of-the-box management and monitoring GUI allow for a smooth on-ramp to learning how to put RethinkDB to work in such applications.

-- Rick Grehan

TOY 2016 Raspberry Pi

Raspberry Pi Zero

Raspberry Pi has been lowering the barrier to entry to programmable computers for years, and this year the project made another big jump. The newest addition to the Pi family is the Raspberry Pi Zero, which costs an unfathomable $5. The goal of the Raspberry Pi Foundation is to encourage kids to learn how to code and help schools incorporate computer science courses into their curriculum more easily. The new, low-priced Pi Zero not only opens the world of programming to an even wider audience, but provides solid hardware for IoT hacking.

Understandably, the Pi Zero, with a single-core 1GHz ARM11 processor and 512MB of memory, has less power than its older sibling, the Pi 2. Its form factor is also considerably smaller. In fact, you can’t even hide a $20 bill behind four of them. As with earlier versions of the Pi, all you get is the board, so you’ll have to pick up a power cable, keyboard, mouse, SD card, and an enclosure if you want one. Plug it into any HDMI display to get video, install the Raspbian OS, and start hacking!

-- Jonathan Freeman

TOY 2016 Red Hat OpenShift

Red Hat OpenShift

Red Hat’s OpenShift was already our favorite open source PaaS. After all, it supported a wide assortment of languages, Web frameworks, databases, and application stacks. It provided fast self-service deployment and automatic scaling. It made updating an application as easy as a git push.

With version 3, OpenShift has been completely rewritten to use Docker containers instead of “cartridges” and “gears.” While some desirable features (including gear idling and automatic scaling) have been deferred to future versions, the PaaS is still robust, outstandingly easy to use, and highly scalable. And now it has Docker containers at the core.

You can control OpenShift from the oc command line or from a Web console. To create and deploy an OpenShift app, you typically start with a Docker image and some code from a Git repository, and use the Source-to-Image service to combine them into a ready-to-run image. Then you set the environment and expose the service to create a route to it. You can scale the app by increasing the number of desired replicas; the replication controller will see that the app needs more pods and run them.

You can automatically update the service every time your repository changes by installing a Web hook in your repository, and easily roll back to a previous version by editing the route using a one-line oc command. Application templates speed deployments by specifying the desired configuration in a YAML or JSON file.

The switch to Docker containers hasn’t opened up as many images for use in OpenShift Enterprise 3 as one might have expected, due to security issues. But once a future version of OpenShift implements a sandbox for images that want to run as root, that should be fixed. For both developers and operators, OpenShift is fulfilling the promise of PaaS.

-- Martin Heller

TOY 2016 Amazon Aurora

Amazon Aurora

The 20-year-old MySQL database is the world’s most widely used open source RDBMS. MySQL will give you good read performance in a multi-user, multithreaded scenario until your application becomes big enough to push the limits of the database. Then it gets tricky: You can add replicas, shard the schema, or upgrade to a pricey commercial database.

Or you could turn to Amazon Aurora. An affordable, high-performance, highly scalable plug-in replacement for MySQL 5.6, Aurora is an attractive option for Web applications that have outgrown MySQL and a possible alternative to Oracle Database or Microsoft SQL Server for applications that don’t need the special features of those databases.
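Because Aurora speaks the MySQL 5.6 wire protocol, "plug-in replacement" means existing driver code works unchanged; typically only the connection endpoint differs. A hedged Python sketch, where the hostnames and credentials are hypothetical and PyMySQL stands in for whatever MySQL driver you already use:

```python
# Aurora is wire-compatible with MySQL 5.6, so an existing MySQL driver
# such as PyMySQL connects to it unchanged; only the host endpoint differs.
# Both endpoints below are hypothetical.

def connection_params(host):
    """Build the driver keyword arguments for a given database endpoint."""
    return {
        "host": host,
        "port": 3306,          # Aurora listens on the standard MySQL port
        "user": "app",
        "password": "secret",
        "database": "orders",
    }

mysql_params = connection_params("mysql.internal.example.com")
aurora_params = connection_params(
    "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
)

# The application code is identical either way, e.g.:
#   conn = pymysql.connect(**aurora_params)
```

The only change when migrating is which endpoint you hand the driver, which is what makes Aurora attractive for applications that have outgrown a self-managed MySQL.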

Amazon claims that Aurora can deliver up to five times the throughput of standard MySQL running on the same hardware with 99.99 percent availability, and we don’t doubt it. Amazon reports 535,000 read requests per second totaled from four SysBench clients, as well as 101,000 write requests per second. In InfoWorld’s own benchmarks, we observed 493,000 read requests per second, slightly shy of what Amazon reported, and 205,000 writes per second, nearly twice what Amazon reported for the test.

It’s a level of performance far beyond any I’ve seen from other open source SQL databases, and it was achieved at far lower cost than you would pay for an Oracle database of similar power. And you get low read-replication lag time and fast crash recovery in the bargain.

-- Martin Heller

TOY 2016 AWS Lambda

AWS Lambda

Imagine being able to define a function that lives in the cloud and handles the designated workloads, without your having to worry about provisioning servers, allocating RAM, scaling the number of instances, or configuring load balancing. With AWS Lambda, you can do exactly that.

AWS Lambda is a compute service that runs function code in Node.js, Java, and Python without provisioning or managing servers, with essentially unbounded scalability. You are only charged for the gigabyte-seconds you use. You can set up Lambda functions to respond to events, such as changes to data, asynchronously; you can call them directly or through an API gateway, synchronously; and you can use them to respond to HTTP(S) requests, synchronously.
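In the Python runtime, a Lambda function is just a handler that receives the triggering event and a runtime context. A minimal sketch, using an illustrative S3-style event shape rather than any particular service's exact payload:

```python
# A minimal Lambda-style handler: the service invokes the function you
# designate with the triggering event (a dict) and a runtime context.
# The "Records" event shape below is illustrative, not exhaustive.

def handler(event, context):
    """Count the records in an incoming event and report the result."""
    records = event.get("Records", [])
    return {
        "status": "ok",
        "records_processed": len(records),
    }
```

Because the handler is an ordinary function, you can exercise it locally by calling it directly with a sample event, which is also how unit tests typically treat Lambda code.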

Amazon justifiably touts Lambda’s ability to extend other AWS options with custom logic; its ability to help you build custom back-end services; its completely automated administration; its built-in fault tolerance; its automatic scaling; its integrated security model; its invitation to bring your own code; its pay-per-use business model; and its flexible resource model.

Lambda is by no means feature complete, but it’s already an excellent option for many common operations -- and a welcome alternative to provisioning VMs or containers.

-- Martin Heller

TOY 2016 Microsoft Azure

Microsoft Azure App Services

We were already impressed with Azure Mobile Services. Then Microsoft trotted out Azure App Services, a managed service that integrates Azure Mobile Services, Azure Websites, and Azure BizTalk Services into a single offering and adds new capabilities for integration with on-premises and cloud systems. The result was an even bigger step forward for cloud-oriented developers.

Azure App Services include the tools and services needed to build four app types: Web Apps, Mobile Apps, API Apps, and Logic Apps. Web App services support .Net, Node.js, PHP, Python, and Java. They can be autoscaled and traffic managed, as well as set up for continuous integration with multiple staging slots.

Mobile App services have a back end in C#/ASP.Net, supporting Windows Phone, iOS (Objective-C or C#/Xamarin), and Android (Java or C#/Xamarin). Mobile Apps can be integrated with on-premises and SaaS systems through connectors of various stripes.

API App services use Swagger and REST as pluggable interfaces, and JSON as the interservice data format. Logic App services can visually compose Connectors and other API Apps into a business process.

Azure App Services makes it easier to create scalable Web and mobile app back ends on Azure, to compose services on Azure, and to integrate Azure apps with systems of record. At the same time, it lowers the cost of running app back ends. Who said cloud-based, back-end service integration had to be painful?

-- Martin Heller

TOY 2016 Microsoft Visual Studio

Microsoft Visual Studio 2015

Visual Studio has always had a raft of features that grew with each release. Visual Studio 2015 extends that trend in surprising ways.

Cross-platform mobile app development? It supports Xamarin and Cordova, with extra credit for portable C++ and integration with Unity.

Cross-platform servers? It has .Net Core, ASP.Net, and Entity Framework, along with Python and Node.js.

Cross-platform editing and debugging? Definitely. The surprisingly capable and lightweight Visual Studio Code runs on Mac OS X, Linux, and Windows.

Cross-platform application lifecycle management? Git and GitHub are supported, and Microsoft has extended the Git support in Team Foundation Server to allow for continuous integration with the same kinds of smart check-in rules that Team Foundation Server has for its own version control system.

Cross-platform builds? In addition to using Visual Studio Build and MSBuild, Team Foundation Build can use Ant, Gradle, Maven, Android Build, Gulp, Xcode, and others.

Cloud deployment? For Azure, at least, check.

Of course, Visual Studio still supports development for Windows, including all of the old technologies for Windows desktop apps and the new Universal Windows Platform apps.

-- Martin Heller

TOY 2016 Salesforce

Salesforce1 and Lightning

Salesforce developers at all skill levels can find good options for building mobile apps based on their Salesforce site. At the most basic level, you can configure compact layouts and both global and field-specific actions for the Salesforce1 mobile app from setup pages.

At a much more advanced level, Salesforce Mobile SDKs make it possible to access Salesforce data from native and hybrid apps, and Mobile Design Templates enable developers to create decent-looking mobile app pages. In between these options, Salesforce’s new Lightning App Builder, Components, and Design System allow you to visually create modern enterprise apps for desktops, tablets, and mobile devices with ease.

The push from Salesforce is for developers to create a “Lightning Experience,” but Salesforce hasn’t taken away any of the older technologies, so existing Salesforce apps will continue to work.

If you already run Salesforce in your company, using Lightning or one of the other mobile Salesforce options to expose your data to users on their devices at no additional cost is a no-brainer. On the other hand, if you don’t have Salesforce, the per-user pricing model is likely to make little financial sense.

-- Martin Heller

TOY 2016 Alpha Anywhere

Alpha Anywhere

Alpha Anywhere is a database-oriented, rapid app development tool that shines at creating Web and hybrid mobile apps that work offline. It allows developers to build good apps quickly, with surprisingly good performance and nativelike look and feel.

Alpha Anywhere’s SQL database support is especially strong because it allows you to use the native SQL dialect of each database if you wish, or use Alpha’s Portable SQL facility, which will emit the appropriate native SQL for the current database connection. Alpha’s support for offline mobile operation is also quite complete, reducing the work of building data conflict resolution logic to a few clicks.

Recently, Alpha added mobile file system access in hybrid mobile apps for large amounts (gigabytes) of data, with compression. This has advantages both for viewing cached media offline and for capturing large numbers of photos, audio files, and videos, even when you don’t have connectivity.

Mobile Optimized Forms are currently in beta test and planned for a 1Q16 release. Alpha is building this capability around a FormView, with specialized controls for things like ink annotations and audio capture (with pausing). The forms work now, but the builders are not yet up to Alpha’s usual bar. When the feature is done, much of the work of building a mobile forms app will be handled by a Genie, Alpha’s equivalent to Microsoft’s Wizards.

-- Martin Heller

TOY 2016 Swagger

Swagger
Swagger is a free, open source, language-agnostic interface to RESTful APIs, with tooling that gives you interactive documentation, client SDK generation, and discoverability. It’s one of several recent attempts to codify the description of RESTful APIs.

The tooling makes Swagger especially interesting. Swagger-UI automatically generates beautiful documentation and a live API sandbox from a Swagger-compliant API. The Swagger Code Generator project allows generation of client libraries automatically from a Swagger-compliant server.

Swagger Editor lets you edit Swagger API specifications in YAML inside your browser and preview documentation in real time. Valid Swagger JSON descriptions can then be generated and used with the full Swagger tooling.
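A Swagger description is just structured data. A minimal sketch of a Swagger 2.0 document, written here as the Python equivalent of the JSON the Editor emits; the pet-store-style path is illustrative:

```python
import json

# A minimal Swagger 2.0 API description. "swagger", "info", and "paths"
# are the required top-level fields; everything below them is illustrative.
spec = {
    "swagger": "2.0",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/pets": {
            "get": {
                "summary": "List pets",
                "responses": {"200": {"description": "A list of pets"}},
            }
        }
    },
}

# Serialize to JSON to feed Swagger-UI or the Swagger Code Generator.
spec_json = json.dumps(spec, indent=2)
```

The same document, fed to Swagger-UI, yields the interactive documentation and sandbox described above.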

The Swagger JS library is a fast way to enable a JavaScript client to communicate with a Swagger-enabled server. Additional clients exist for Clojure, Go, Java, .Net, Node.js, Perl, PHP, Python, Ruby, and Scala.

The (paid) Amazon API Gateway is a managed service for creating and managing APIs at scale. It can import Swagger specifications using an open source Swagger importer tool.

While the API management space is starting to become crowded, Swagger is our top choice for its useful tooling, excellent UI generation, extensive language support, and widespread adoption. The typical startup introducing its own REST API doesn’t agonize about how it will document the API and provide a test bed for developers: It simply generates a Swagger UI.

-- Martin Heller

Copyright © 2016 IDG Communications, Inc.