Intel tools harness the true promise of multicore chips

Intel Parallel Building Blocks make it easier for C and C++ developers to take advantage of multicore processors

The gigahertz wars are long over. Now and for the foreseeable future, CPU performance gains will come not from increasing clock speeds but from packing ever more processor cores onto chip dies. Today's fast six-core processors are nothing compared to the many-core designs chipmakers have in the pipeline, and dual-core chips have even begun making their way into phones and other mobile devices.

But there's a problem. Multicore chips achieve performance gains through parallelism, but parallelism in software doesn't come for free. Before an application can take advantage of today's multiprocessor architectures, developers must build support for parallel task and data management at the lowest levels. Unfortunately, many of today's developers learned their craft at a time when this level of multiprocessing was limited to the rarefied world of supercomputing. They simply lack the skills necessary to build reliable and effective parallel software.


Little wonder, then, that Intel has made tools and support for parallel software development such a priority in recent years. At the Intel Developer Forum 2010 conference, which took place this week in San Francisco, the chipmaker unveiled not one but three new technologies aimed at allowing developers to take better advantage of multiprocessing on Intel architecture. Collectively, the three tools are known as Intel Parallel Building Blocks, and they're available today as part of Intel Parallel Studio 2011, which shipped earlier this month. But will Intel's efforts really be a boon to developers, or will they serve mainly to widen the gap between Intel and its rivals, including AMD, Via, and ARM?

Three paths to parallelism
Intel isn't alone in tackling parallel software development, but different companies have approached the problem in different ways. Some, including Google and Sun Microsystems, have taken the route of building entire new programming languages around parallelism. This allows developers to enter this new world with a clean slate, but it also increases their learning curve. All of Intel's tools, on the other hand, work with plain old C and C++.

Well, almost. The first of Intel's three new tools, Cilk Plus (pronounced "silk"), makes it easier to add fundamental parallel features to programs by introducing new keywords to the C++ language itself. For example, the new cilk_for keyword generates loops that are automatically parallelized and managed by a task scheduler, while a new array notation marks data as a better target for SIMD instruction sets (the MMX and SSE technologies on Intel processors). Because the syntax of the new keywords is essentially the same as traditional C++ syntax, developers can bring parallelism to their programs and remove it again simply by swapping out a few keywords -- making debugging much less painful. All of the low-level code required to enable parallelism is generated by the compiler.
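To give a rough sense of what that looks like, here is a minimal sketch, assuming a Cilk Plus-aware compiler such as the Intel C++ compiler. The process() and scale_all() functions and the data vector are illustrative placeholders rather than Intel sample code; cilk_for itself is the keyword described above.

// A minimal sketch of a Cilk Plus loop (assumes a Cilk Plus-capable compiler).
#include <cilk/cilk.h>
#include <cstddef>
#include <vector>

void process(double &x) { x *= 2.0; }   // stand-in for real per-element work

void scale_all(std::vector<double> &data) {
    // cilk_for splits the iterations across worker threads and hands them
    // to the runtime's task scheduler. Changing cilk_for back to a plain
    // "for" restores serial execution, which keeps debugging simple.
    cilk_for (std::size_t i = 0; i < data.size(); ++i) {
        process(data[i]);
    }
}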

The second tool, known as Threading Building Blocks (TBB), takes a more traditional approach. Its syntax will still feel familiar to C++ programmers, but rather than adding keywords to the language, it provides parallel capabilities in the form of a C++ template library. To enable parallelism, developers need only replace Standard Template Library (STL) data types with the corresponding types from TBB. TBB also includes a task manager and a scalable memory allocator designed for parallel workloads.
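Here is a minimal sketch of the TBB style, again with illustrative names rather than Intel's own samples: parallel_for, blocked_range, and concurrent_vector are part of the TBB library, while the records container and the doubling work stand in for real application code.

// A minimal sketch using TBB (assumes the TBB headers and library are available).
#include <cstddef>
#include <tbb/blocked_range.h>
#include <tbb/concurrent_vector.h>
#include <tbb/parallel_for.h>

int main() {
    // concurrent_vector can be grown safely from multiple threads, much
    // as std::vector would be used in serial code.
    tbb::concurrent_vector<int> records;
    for (int i = 0; i < 1000; ++i) records.push_back(i);

    // parallel_for carves the index range into chunks and lets TBB's task
    // scheduler map them onto the available cores.
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, records.size()),
        [&](const tbb::blocked_range<std::size_t> &r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                records[i] *= 2;
        });
    return 0;
}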
