The case for monitoring with VMs

Compiled code no longer holds a clear advantage for developers

In the days when processor speeds were struggling upward through the tens of megahertz, interpreted and intermediate-code implementations were rarely considered candidates for large-scale applications. Interpreted applications noticeably lagged behind compiled counterparts in execution speed.

Nowadays, processors run at gigahertz speeds, and memory and disk capacities are likewise measured in gigabytes. Interpreters and VMs (virtual machines) don’t look so shabby anymore. Java was the first widely successful VM implementation, and Microsoft obviously saw the worth of the virtual machine; the .Net CLR (Common Language Runtime) borrows much from Java. Granted, an application running in a VM still executes more slowly than its compiled equivalent, but that VM application enjoys advantages over compiled code. Those advantages translate to easier porting, easier debugging, and faster turnaround.

A VM creates not simply a virtual processor, but a complete run-time environment; a processor-and-OS-within-a-processor-and-OS. An application running in a VM interacts only with this manufactured environment. This arrangement allows VM applications to be moved easily from one system to another. You don’t port the application, you port the VM.

Building a debugger or profiler is easy if the VM provides appropriate APIs built right in, as do the Java VM and the .Net CLR. For Java, these APIs are the JDI (Java Debug Interface) and JVMPI (Java Virtual Machine Profiler Interface). The .Net CLR publishes debugging and profiling services, both as sets of COM (Component Object Model) objects. .Net even allows a profiler to “quietly” instrument MSIL (Microsoft Intermediate Language) code after it has been compiled from source but before it passes to the JIT (just-in-time) compiler. Consequently, profiling tools can be much less intrusive because they need not explicitly assert themselves at source compile time. Note that Java’s JVMPI will be replaced in upcoming versions by the JVMTI (JVM Tool Interface), which will reportedly handle large applications better than JVMPI.

Because debugging and profiling “hooks” are built into the VM, tools don’t have to know how the VM’s internals work, nor how it represents data structures internally. Profilers and debuggers simply pose questions to the VM along the lines of “Please tell me the next time a method is called” or “Please pass me a copy of that data structure,” and the VM does the dirty work. Tools need only gather, summarize, and display the results.
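The request-and-callback idea can be sketched in a few lines. CPython stands in for the VM here (this is Python’s `sys.settrace` hook, not the JDI or CLR API itself): the tool registers a callback, the runtime reports each call event, and the tool never touches the runtime’s internals.

```python
import sys

# "Please tell me the next time a function is called": register a
# callback with the runtime and let it report call events to us.
calls = []

def on_event(frame, event, arg):
    if event == "call":                      # the runtime saw a function call
        calls.append(frame.f_code.co_name)   # record which function it was
    return on_event                          # keep tracing nested frames

def work():
    return sum(range(10))

sys.settrace(on_event)   # subscribe to the runtime's events
work()
sys.settrace(None)       # unsubscribe

print(calls)
```

The tool’s job reduces to gathering and displaying what the runtime reports; the runtime decides how frames and code objects are represented.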

And don’t forget interpreted environments. In one sense, it’s even easier to build customized debugging and profiling tools for interpreters. In many cases, you can build these from within the interpreter itself, largely because the interpretive engine is available at run time, as is the target source code, as well as variables’ symbolic names and contents. The interpreter knows its own internal representation of data structures, so the debugger writer can call on routines already in the interpreter to display complex data structures. Perl, for example, provides its debugger as a loadable library.
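The same point can be made concretely with a toy profiler built entirely from facilities the interpreter already ships (using Python’s `sys.setprofile` and `pprint` rather than Perl, purely for illustration): the engine delivers every call event, and a library routine that already understands the interpreter’s data structures handles the display.

```python
import sys
from collections import Counter
from pprint import pformat

# A customized profiler built inside the interpreter: count how many
# times each Python function is called.
counts = Counter()

def profiler(frame, event, arg):
    if event == "call":                       # Python-level call event
        counts[frame.f_code.co_name] += 1

def helper():
    pass

def main():
    for _ in range(3):
        helper()

sys.setprofile(profiler)   # the engine now reports events to us
main()
sys.setprofile(None)

# The interpreter's own library knows how to display the structure.
print(pformat(dict(counts)))
```

No knowledge of the interpreter’s internal frame or code-object layout was needed; the hook and the display routine came with the interpreter, just as Perl’s debugger arrives as a loadable library.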
