First look: Joyent Triton puts cloud computing on a bare-metal diet

Joyent’s Docker-driven container compute instances are superthin, superfast, and supercheap

It's been more than a year since Docker began taking over the mind of every devops team in the world with the seductive promise of a special mechanism for bundling software in a lightweight collection of only the essential files. The emphasis here is on "lightweight" and "essential" because Joyent's new Triton service is redefining how light "light" can be.

Now Joyent is one of the first companies to bring Docker to the cloud marketplace by offering a service called Triton, which lets users start up a Docker container instead of a VM. In the past, Joyent ran a heavier service that started up the Docker container inside of a separate virtual machine, but now the company has put together a mechanism that juggles the containers directly on the metal. Joyent suggests that the code will run faster (generally) and the overhead will be smaller (most of the time). It certainly shows how amazing Docker can be.

The advantages of Triton come largely (but not entirely) from the clever idea embedded in Docker itself. Instead of shipping an entire VM filled with every last bit of the operating system and the software, Docker packages up only the files you've added to or changed from the base distro. Then it tars these up into image layers and arranges for a bunch of the resulting containers to run at the same time without bumping into one another. You can put your files in your container at /home/root and I can put my files in my container at /home/root, and they won't interfere with each other.

It's easy to see how seductive this might be. While starting up servers in the cloud is much simpler than filling out the purchase orders to install one in your own back office, it's still a bit of work to configure. Docker promises to make it even easier to package up your code because you ship a relatively tiny container of only the essential files from your development machine. The cloud fills in the rest.

Tiny slices of fast compute

The great thing is that the containers simply don't need as many resources. Joyent is offering tiny machines with as little as 128MB of RAM -- an amount that sounds forbiddingly small, but is pretty good because you get almost all of that RAM for your own use. Usually when you spin up an instance in the cloud, you have to set aside a big chunk for the operating system. A VM with 128MB won't leave much for your code because most of it will be filled with Ubuntu or CentOS. But the RAM allocated to a container is mostly yours, aside from some housekeeping.

The fact that Joyent can slice the salami this thinly illustrates how Docker is going to revolutionize the cloud. The metaphor for the cloud suggests it's full of stand-alone "machines" or "instances" that we rent like hotel rooms. But in reality there may be hundreds of "instances" sharing a high-powered machine. A better metaphor might be hot racks on a submarine, where each sailor is allocated a bunk for only eight hours a day.

The current cloud machines on VM-based services such as Amazon, Azure, and Google Cloud are extremely wasteful. Each instance has its own copy of Linux eating up RAM and disk space. It's possible that 90 percent of the cloud is filled with copies of Linux.

Once you recognize this, it’s obvious how Joyent can charge so little but deliver so much. Joyent’s 128MB machine costs only 0.0055 cents per minute. When your great-great-grandparents went to the store to buy penny candy, they couldn't spend this little. And notice how Joyent lets the meter run by the minute. Most cloud services charge by the hour, but Joyent says it wants people to spin up containers for a few minutes and shut them down. If you let the container go for an entire month, you could rack up a whopping bill of $2.38!
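That bill is easy to verify: minute pricing times the minutes in a 30-day month, converted from cents to dollars.

```shell
# 0.0055 cents per minute, every minute of a 30-day month, converted to dollars:
awk 'BEGIN { printf "$%.2f\n", 0.0055 * 60 * 24 * 30 / 100 }'   # prints "$2.38"
```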

I wonder how many people will end up using these tiny slices. Right now Joyent pegs the ratio of CPU time to RAM, so you only get 1/16 of a CPU for that 0.0055 cents. If you want a full "CPU" -- the quotes indicate it's also a pretend share that approximates what a real CPU might deliver -- you have to pay roughly 16 times more and get 16 times more RAM. (Joyent’s pricing isn't exactly linear, but it’s close.)

How fast is Joyent’s container service? In my tests, screamingly fast. The default-size machine comes with 1GB of RAM and half a CPU. I was able to run the DaCapo Java benchmarks and produce extremely fast results. The Xalan test for converting XML to HTML ran in 4,855 milliseconds, which is three times faster than when I ran the benchmark on Joyent’s stock cloud machines several years ago. Then, Xalan took 14,456 milliseconds. The Tomcat test was about six times faster and the ray tracing benchmark was about seven times faster.
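For anyone who wants to reproduce this, the DaCapo suite ships as a single jar (the file name below is from the 9.12 "bach" release; your version may differ), and the speedup quoted above is just the ratio of the two Xalan timings.

```shell
# Hypothetical invocation -- requires a JVM and the DaCapo jar on the instance:
#   java -jar dacapo-9.12-bach.jar xalan

# The "three times faster" claim, computed from the two timings reported above:
awk 'BEGIN { printf "%.1fx\n", 14456 / 4855 }'   # prints "3.0x"
```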

Joyent Triton Docker command line

Joyent's Triton container service currently lacks a Web UI. All of the excitement comes through the Docker command line. 
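Working without a Web UI is less painful than it sounds, because the stock Docker client just needs to be pointed at a remote endpoint. A sketch, with a hypothetical datacenter hostname (the real endpoint address and TLS certificates come from Joyent's account setup):

```shell
# Point the standard Docker client at a (hypothetical) Triton endpoint:
export DOCKER_HOST="tcp://us-east-1.docker.example.com:2376"
export DOCKER_TLS_VERIFY="1"

# From here on, ordinary Docker commands manage containers in the cloud:
#   docker info                    # confirm the client is talking to Triton
#   docker run -d -m 128m nginx    # a 128MB container, straight on the metal
#   docker ps                      # list the running containers
```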

Bursts for free

This sounds amazing, of course, but it's a bit of a mirage. Joyent has not broken the laws of physics or outrun Moore’s Law. Part of the reason for this blistering speed is that the benchmark announces it has discovered 48 cores, so it’s going to use all of them. Woo hoo!

The burstiness will create confusion as we try to figure it out. If you paid for a CPU, at some point Triton is going to step in and make sure you get only what you pay for. But for short bursts, Joyent will let the containers run wild -- because why not? If the CPU cycles aren't used, they'll disappear like the sand in the hourglasses that measure the days of our lives. You might as well let the containers use them.

I suppose the one real argument against the free bursts is that people will be confused. They'll sign up, get amazing performance for a bit, then watch their containers get slower and slower. But the other option, strictly capping everyone at exactly what they paid for, is so wasteful that Joyent is going to warn people and let it sort itself out. One of Joyent’s product managers, Casey Bisson, said, “This is an important problem for us to solve quickly, but it won’t be in place for the GA launch.” I think the only solution is to relax and enjoy the “free” bursts.

Joyent is clearly interested in delivering some of the sleepier services on the Web, the ones that answer a single request every so often. If these services need to burst for a few seconds and chew up 48 cores, Joyent is ready to oblige. The rest of the time, they can sit there sleeping. On average, they can consume no more than the fraction they purchased.

For grins, I also fired up a few 128MB, 1/16-CPU machines and ran the same tests. They weren't as fast as the default 1GB, 1/2-CPU instance, but they were better than 1/8 speed. The Lucene search benchmark in the DaCapo suite ran in 1,819 milliseconds on the 1/2-CPU machine but 4,077 milliseconds on the 1/16-CPU instance. In general, the numbers were about two to three times slower, but not eight. When the 128MB machine didn't run out of memory, it was a good deal at 0.0055 cents per minute. Both machines thought there were 48 cores thanks to the bursting.

It's worth noting that my benchmark results varied quite a bit between runs, and some failed to finish. The Eclipse test, for instance, ran out of heap space on the small machine. Your mileage will almost certainly be different, and it will probably vary much more than the same benchmarks run on that dedicated machine on your desk.

Joyent Triton 48 cores

If you run htop, you can see that your Triton instance is running on a 48-core machine. Sometimes you can burst out to use more than your share. 
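You don't need htop's full display to see this; the usual core-count probes inside a container report the host's hardware, not your fractional share. A quick check to run inside the instance (on the machines used in these tests, the benchmark saw 48):

```shell
# Inside the container, the kernel-reported core count is the host's, not your share:
nproc                               # 48 on the test machines described above
grep -c '^processor' /proc/cpuinfo  # the same count, read straight from /proc
```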

SmartOS under the hood

But this may be part of the fun. I think that Joyent is imagining a new lightweight kind of architecture appearing. The containers start up dramatically quickly, in a few seconds, where traditional cloud instances can take minutes. It almost seems feasible to start up an entire container simply to answer a Web service request, then shut it down when it’s done, because even those 0.0055-cent-per-minute charges add up if you leave the meter running.

While the amazing opportunities of Docker and Triton are obvious, it’s worth noting some caveats. Both Docker and Triton are quite new. What I used is merely a step on the way to “general availability.” While much of what I tried to do worked quickly and effectively, some things didn't. Even some simple commands like apt-get didn't work some of the time. In one case, the new package tried to set up Fuse, and it didn't work. Why? The solution for me was to stop the package manager from “installing all recommended packages too.” You’ll spend a bit of time fiddling with Unix details like this.
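That fix is a standard apt switch rather than anything Triton-specific. A sketch, where the package name is a stand-in for whatever dragged Fuse in:

```shell
# One-off: install without the recommended packages that pulled in Fuse:
#   apt-get install --no-install-recommends <package>

# Or make that the container-wide default with an apt.conf.d fragment:
#   printf 'APT::Install-Recommends "false";\n' > /etc/apt/apt.conf.d/99no-recommends
```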

There are other compromises. Joyent doesn't offer much of a Web interface. You'll use the Docker command line to start up and shut down your containers. Joyent promises a Web interface for managing containers will appear by the time the service reaches GA. Docker is still very much a work in progress.

Joyent’s operation is also evolving quickly. It's important to understand that Joyent is trying something quite clever. When you start up a “Linux” container, you’re not actually running code that comes from the Linux team. You're actually running on a kernel that descends from the old Solaris world. Joyent calls it SmartOS.

Joyent’s Bisson was very upfront about this. He said Joyent sells something that delivers a Linux API good enough to be called Linux in some people’s minds. It’s not the code base but the API that defines it. The clever folks at Joyent are able to deliver their great performance by getting the solid Solaris bones to support the Linux API.

I hesitate to guess how often this will work for you. Certainly, it worked most of the time for me. But some of the code I tried to run simply failed. A SOAP simulation, for instance, barfed because it couldn't find a USB connection. If I played with it, I’m sure I could work around it. In a few months, a new version of Docker or Triton could patch it. They're moving very quickly.

Questions and opportunities

There are deeper issues about security. While Joyent has obviously spent a great deal of time worrying about information leaking between containers, I think they’re working at the frontier, and it may be some time before our paranoia ebbs. For instance, when your code allocates a block of RAM, this is passed down through the thin Linux emulation layer to the Solaris kernel, which provides that RAM. When you free up the memory, it’s given back to the kernel that will probably hand it off to another container right away. Do you have any passwords or keys in that RAM? I hope not.

All of this could raise as many questions as opportunities because at some point the laws of physics will weigh us down. It’s hard to know exactly what you’ll get for your 0.0055 cents, and we probably won't have a clue until the service has been running for a bit. The current billing model may not be the best one, and Joyent may switch to a more metered approach in the future. Certainly the ability to burst and take over 48 cores is a great feature for some applications, so it’s kind of silly to ask for perfect precision. The freedom is more fun.

There will continue to be glitches in the moments when Docker and Triton aren't perfect simulacra of your desktop machine and your desktop installation of Linux, no matter how hard they try. But I think it would be a mistake to stomp our feet and hold our breath simply because apt-get (or something similar) doesn't work perfectly right out of the box. I think we'll need to spend a bit of time under the hood, fiddling and jiggering the code, and the reward will be a great deal of flexibility. It took only a few minutes for me to get most of my code running most of the time.

For all of these hassles, the simplicity and low overhead of Docker are so amazing that we will all be shipping around containers in no time. Triton alone is pretty cool, even if you don't rewrite your code to an architecture built around containers that turn on and off like stoplights. If you retool your application to take advantage of the short startup times and minute-by-minute billing, well, it could be incredible.

Copyright © 2015 IDG Communications, Inc.