Code optimization is one of the fundamental steps in software development. All programmers want their code to run faster or take up less space (and sometimes both). Given the complexity of modern CPU architectures, however, when you get down to the processor level, effective optimization can be incredibly difficult, not to mention time-consuming.
That's why most programmers don't bother -- that is, they seldom perform such low-level optimizations by hand. Instead, they rely on optimizing compilers to go the last mile for them. Today's optimizing compilers output machine code so compact and efficient that only a master assembly language programmer could compete. Further optimization by hand simply isn't worth the effort.
But that assumes a good optimizing compiler exists for the architecture in question. Finding good compilers for mainstream desktop CPUs is easy, but that's not always the case for mobile devices. In the fast-paced world of embedded systems, processor architectures can change so quickly that compiler designers have a hard time keeping up.
The MilePost project, a consortium of researchers backed by IBM Research and the European Union, thinks it has the answer. It's developed a new, experimental version of the GCC compiler that uses artificial intelligence to improve the quality of its own output. The goal is to allow compiler developers to spend less time tweaking compilers for specific platforms by enabling the compilers to handle that part on their own. It's a wacky but fascinating idea -- and it might just be the future of compiler design.
A learned approach to optimization
As it stands today, MilePost is hardly HAL 9000. It won't write your code for you or even suggest more efficient algorithms. What it can do, however, is use machine-learning techniques to gather data about software performance and adjust the machine code it outputs accordingly.
It's a complex process -- and there are plenty of research papers available if you want to delve deeper -- but in a nutshell, it works by analyzing the source code input to find specific "features" that might be good candidates for optimization. In this context, features might include such traits as the number of subroutines in the code that take a lot of parameters, whether there are a lot of nested loops, or which types of math the program uses most often. Once MilePost has built a catalog of all the features present in a given program, it can use statistical techniques to decide which optimizations will yield the best results and readjust its own modular design as appropriate.
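To make the idea concrete, here is a minimal sketch of that kind of statistical matching, not MilePost's actual implementation. The feature names, numbers, and GCC flag sets are invented for illustration; the sketch simply represents each program as a feature vector and reuses the flags that worked best for the most similar previously profiled program, a nearest-neighbor form of the prediction described above.

```python
import math

# Hypothetical training data: feature vectors for programs already
# benchmarked, mapped to the optimization flags that performed best.
# Features: (avg. parameters per subroutine, max loop nesting depth,
#            fraction of floating-point operations)
KNOWN_PROGRAMS = {
    (2.0, 1, 0.05): ["-O2", "-funroll-loops"],
    (6.0, 4, 0.60): ["-O3", "-ffast-math", "-funroll-loops"],
    (3.0, 2, 0.10): ["-Os"],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_flags(features):
    """Pick the flag set of the most similar known program."""
    nearest = min(KNOWN_PROGRAMS, key=lambda known: distance(features, known))
    return KNOWN_PROGRAMS[nearest]

# A new program with deep loop nesting and heavy floating-point math most
# resembles the second training entry, so it inherits that entry's
# aggressive flags.
print(predict_flags((5.0, 3, 0.55)))
```

A real system would extract hundreds of features automatically during compilation and use a far richer statistical model, but the principle is the same: let measured data, rather than intuition, choose the optimizations.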
This isn't so different from what human compiler developers have been doing for years, but for humans the process is much more hit-or-miss. Often the best they can do is guess which optimizations will pay off most often. MilePost has the advantage of basing its decisions on statistical data gathered from real-world use cases in specific environments. Initial tests by IBM have shown that MilePost can improve performance by as much as 18 percent over the code output by traditional compilers.
In addition, the MilePost project has since spawned the Collective Tuning Initiative, a Web-based collaboration effort with the goal of accumulating still more information that can be applied to improve self-modifying compilers such as MilePost.
It's about more than just speed
It's fair to ask how important this kind of low-level optimization really is, when processors seem to be gaining speed by leaps and bounds each year. Most users probably don't use half the processing power of their PCs as it is. But while that may be true on the desktop, it's a different story in the world of mobile devices. When targeting devices with low-powered processors and limited resources, code optimization isn't an option -- it's essential.
Developers of software for handhelds often target several platforms at once. Achieving equivalent performance on each platform can require extensive optimization, and if no mature optimizing compilers exist, that optimization has to be done by hand. This process might take months, adding considerable overhead to the development budget and delaying time to market. But a machine-learning compiler that adapts to each platform individually could eliminate much of this drudgework, slashing development costs in the process.
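The per-platform tuning loop described above can be sketched in a few lines. Everything here is illustrative: the platform names, flag sets, and runtimes are invented, and in practice the numbers would come from compiling and benchmarking real code on each target device rather than from a hard-coded table.

```python
# Invented benchmark results: measured runtime (ms) of the same program
# compiled with different flags on different target platforms.
MEASURED_RUNTIMES_MS = {
    ("arm-handheld", "-O2"): 940,
    ("arm-handheld", "-O3"): 905,
    ("arm-handheld", "-Os"): 870,   # small caches can favor compact code
    ("mips-settop",  "-O2"): 780,
    ("mips-settop",  "-O3"): 705,
    ("mips-settop",  "-Os"): 760,
}

def best_flags_per_platform(measurements):
    """For each platform, keep the flag set with the lowest measured runtime."""
    best = {}
    for (platform, flags), runtime in measurements.items():
        if platform not in best or runtime < best[platform][1]:
            best[platform] = (flags, runtime)
    return {platform: flags for platform, (flags, _) in best.items()}

print(best_flags_per_platform(MEASURED_RUNTIMES_MS))
```

Note that the two hypothetical platforms end up with different flag choices from the same data, which is exactly the drudgework -- trying candidate optimizations on each target and keeping what measures fastest -- that an adaptive compiler would automate.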
In the long term, machine learning technology could benefit compilers in other ways, too. One possible application is to help optimize software for modern, multicore processors. Parallelization remains one of the more challenging aspects of programming, with many potential pitfalls. A compiler that uses machine learning to identify ways to parallelize code automatically could be a tremendous boon to the software industry as a whole, as chipmakers increasingly turn toward multicore designs.
For now, MilePost remains very much an experimental project. You can download the current working version in source code form, build it, and start experimenting yourself. But don't expect to deploy it for your next software project just yet. Nonetheless, MilePost represents exciting research that could pave the way for the next major evolution in compiler technology. If you care about software efficiency, and particularly if you develop software for handheld devices or other embedded systems, it's one to keep your eye on.