Code optimization is one of the fundamental steps in software development. All programmers want their code to run faster or take up less space (and sometimes both). Given the complexity of modern CPU architectures, however, when you get down to the processor level, effective optimization can be incredibly difficult, not to mention time-consuming.
That's why most programmers don't bother -- that is, they seldom perform such low-level optimizations by hand. Instead, they rely on optimizing compilers to go the last mile for them. Today's optimizing compilers output machine code so compact and efficient that only a master assembly language programmer could compete. Further optimization by hand simply isn't worth the effort.
But that implies a good optimizing compiler exists for a given architecture. Finding good compilers for mainstream desktop CPUs is easy, but that's not always the case for mobile devices. In the fast-paced world of embedded systems, processor architectures can change so quickly that compiler designers have a hard time keeping up.
The MilePost project, a consortium of researchers backed by IBM Research and the European Union, thinks it has the answer. It has developed a new, experimental version of the GCC compiler that uses artificial intelligence to improve the quality of its own output. The goal is to allow compiler developers to spend less time tweaking compilers for specific platforms by enabling the compilers to handle that part on their own. It's a wacky but fascinating idea -- and it might just be the future of compiler design.
A learned approach to optimization
As it stands today, MilePost is hardly HAL 9000. It won't write your code for you or even suggest more efficient algorithms. What it can do, however, is use machine-learning techniques to gather data about software performance and adjust the machine code it outputs accordingly.
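To get a feel for the general idea -- not MilePost's actual algorithm -- consider a toy sketch of learned optimization: extract a few static features from a program, then predict a good set of compiler flags by finding the most similar program seen during training. The feature names, numbers, and flag sets below are invented for illustration.

```python
import math

# Hypothetical training data: static program features (basic-block count,
# loop nest depth, memory-op ratio) paired with the flag set that performed
# best on that program in past measurements. All values are made up.
TRAINING = [
    ((12.0, 2.0, 0.30), ["-O2", "-funroll-loops"]),
    ((90.0, 5.0, 0.65), ["-O3", "-ftree-vectorize"]),
    ((8.0, 1.0, 0.10), ["-Os"]),
]

def predict_flags(features):
    """Return the flag set of the most similar training program (1-nearest-neighbor)."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(TRAINING, key=lambda entry: dist(entry[0], features))
    return nearest[1]

# A large, deeply nested, memory-heavy program lands near the second
# training example, so it inherits that example's flags.
print(predict_flags((85.0, 4.0, 0.60)))  # ['-O3', '-ftree-vectorize']
```

A real system of this kind would use far richer feature vectors and a trained statistical model rather than a three-entry lookup, and it would feed actual timing measurements back into the training data -- but the core loop is the same: measure, learn, and let the compiler pick its own optimizations.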