The virtualization myth

There’s a lot more to it than a bunch of developers running Windows and Linux together

There are only a few markets ideally suited for virtualization. One of them is software development. As the scene is usually painted, the developer sits at his or her desk, compiles new software, and launches it in a virtual machine so that when it crashes, it doesn’t take the whole box down.

We hear of developers who keep a Linux instance open on Windows, or vice-versa, either for the sake of cross-platform productivity or to strike a blow for religious freedom.

But as long as virtualization is viewed as a convenience for individual developers, IT is not likely to stay excited about it. The larger reality is that once virtualization can be assumed to be a standard component of every OS, changes with bottom-line impact will take place miles away from any one developer’s desk.

As a former developer, I get excited about it. Now that I know I can do with virtualization what I always wished I could, I can’t imagine architecting, developing, or doing QA on SOA and other distributed solutions of scale without virtualization as a core OS component on every machine my solution might touch.

When a development lead hands a project to QA, technical writers, tech support, or anyone else in the development chain, the project should always be passed along as a virtual disk image that’s ready to roll.

That means the virtual disk image would have the OS configured with all of the application’s dependencies in place, the application itself, and the sample data and scripts required to test it thoroughly.

If I could do that back when I worked in development, I might have stuck with it longer.

If I were still working in IT, I’d declare that any software solution pitched to me could not get through my door as a stack of install discs, a quick start guide, and a “give me a call if you run into any trouble.”

Leave me with a DVD that has a VM image I can copy to my local drive and execute with the OS’s default virtualization engine. If it’s a client/server solution, give me two VM images to launch side by side; I can handle that. If it’s SOA, give me one image I can launch multiple times. I can handle that, too. That’s the kind of task I could hand off to a junior member of my technical staff.
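A minimal sketch of what that handoff could look like, using QEMU as one possible stand-in for an OS’s default virtualization engine (the image names, memory sizes, and instance count here are illustrative assumptions, not details from the article):

```shell
# Client/server solution: launch the two shipped images side by side.
qemu-system-x86_64 -m 2048 -hda server.qcow2 &
qemu-system-x86_64 -m 2048 -hda client.qcow2 &

# SOA solution: launch one image multiple times. Copy-on-write
# overlays keep each instance's writes separate from the shared
# master image, so the instances don't corrupt one another.
for i in 1 2 3; do
  qemu-img create -f qcow2 -b service.qcow2 -F qcow2 "node$i.qcow2"
  qemu-system-x86_64 -m 2048 -hda "node$i.qcow2" &
done
```

The overlay step is what makes the “one image, many instances” case a one-liner for a junior staffer: the vendor’s master image is never written to.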

I would have taken a lot more demos from vendors, and looked at a lot more intermediate builds, if I could have just double-clicked a virtual disk image with 100 percent certainty that the OS and application would just run.

For reproducible problems, the turnaround time between a problem report and a response from developers, contractors, tech support, or even Web site designers could be cut to next to nothing. You’d run the problematic application or site in a VM, drive it to the point where the error occurs, freeze the virtual machine, and ship its disk image and the file containing its running state to the responsible developer or support tech.
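A sketch of that freeze-and-ship workflow, using libvirt’s `virsh` as one example of a VM manager (the VM name and file names are illustrative assumptions, not part of the article):

```shell
# Tester's side: drive the VM named "repro-case" to the failure
# point, then freeze it, saving its RAM and device state to a file.
virsh save repro-case repro-case.state

# Bundle the disk image and the saved running state for handoff.
tar czf repro-case.tar.gz repro-case.qcow2 repro-case.state

# Developer's side: restore the VM exactly where it was frozen,
# with the error one step away from recurring on screen.
virsh restore repro-case.state
```

The saved-state file is what eliminates the chit-chat: the developer resumes the machine mid-failure instead of retracing the tester’s steps.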

They will be just one click away from experiencing the problem exactly as you saw it, and they will be able to instant-replay it as many times as suits them, with no chit-chat about what you did to get to that point, and (dare we hope?) no argument about whether the problem can be reproduced on their end.
