Mozilla tackles parallel JavaScript

Mozilla is experimenting with its SpiderMonkey JavaScript engine to maximize hardware power


Mozilla is pursuing parallelism for JavaScript in an attempt to fully leverage available hardware capabilities.

In a blog post last Thursday, Mozilla Research's Dave Herman, principal researcher and director of strategy, noted experiments the organization has been conducting with its SpiderMonkey JavaScript engine, taking a low-level approach to extending JavaScript with primitives for parallelism. "Between the coming release of ES6 and unrelenting competition for JIT performance, these are exciting times for JavaScript," Herman said. "But an area where JavaScript still lags is parallelism -- exploiting hardware acceleration by running multiple computations simultaneously."

The intention of parallelism, Herman stressed, is "unlocking the power lurking inside our devices: GPUs, SIMD instructions, and multiple processor cores." He added, "With the emerging WebGL 2.0 and SIMD standards, the Web is making significant progress on the first two. And Web Workers go some part of the way toward enabling multicore parallelism." Web Workers provide a way for Web content to run scripts in background threads, but they are isolated, Herman said.
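A minimal sketch of that isolation: the page and the worker exchange data only by message passing, and each message is copied rather than shared (the worker script's file name, worker.js, is illustrative).

```js
// main.js: spawn a worker and communicate purely by message passing.
// Each message is structured-cloned, so the worker receives its own
// copy of the data -- the two sides never share mutable state.
const worker = new Worker("worker.js");
worker.onmessage = (event) => {
  console.log("result from worker:", event.data);
};
worker.postMessage({ numbers: [1, 2, 3, 4] });
```

```js
// worker.js: runs in a background thread, isolated from the page.
onmessage = (event) => {
  const sum = event.data.numbers.reduce((a, b) => a + b, 0);
  postMessage(sum);
};
```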

To achieve the goal of parallelism, Mozilla is experimenting with a SharedArrayBuffer API in SpiderMonkey. The organization is drafting a specification for the API, with a prototype implementation available in Firefox Nightly builds, said Herman, who also noted that Mozilla is looking for users to provide feedback.
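A sketch of how sharing would look, using the SharedArrayBuffer surface as it was later standardized (the prototype in Nightly may differ in detail, and worker.js is again an illustrative file name):

```js
// main.js: allocate shared memory and hand the same buffer to a worker.
// Unlike an ordinary message, the buffer is not copied; both sides see
// the same underlying bytes.
const shared = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const view = new Int32Array(shared);
const worker = new Worker("worker.js");
worker.postMessage(shared);
view[0] = 42; // the worker observes this write in the shared memory
```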

A SharedArrayBuffer type with built-ins for locking introduces new forms of blocking to workers, plus the possibility that some objects could be subject to data races, Herman explained. "But unlike [the Nashorn project in Java], this is only true for objects that opt in to using shared memory as a backing store -- if you create an object without using a shared buffer, you know for sure that it can never race. And workers do not automatically share memory; they have to coordinate up front to share an array buffer," said Herman.
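On the worker side, coordination over that opted-in shared memory would look roughly like this, assuming the standardized Atomics built-ins rather than whatever locking primitives the draft specification ultimately defines:

```js
// worker.js: operate on memory that was explicitly shared with it.
// Plain writes to a shared Int32Array can race with other workers;
// Atomics.add performs the read-modify-write as one indivisible step.
onmessage = (event) => {
  const view = new Int32Array(event.data); // event.data is the SharedArrayBuffer
  for (let i = 0; i < 100000; i++) {
    Atomics.add(view, 1, 1); // safe even when several workers increment slot 1
  }
  postMessage("done");
};
```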

He emphasized that Mozilla's parallelism efforts did not pertain to concurrency, which is about programs that respond to simultaneous events. "JavaScript's asynchronous concurrency model is popular and successful, and with promises, ES6 generators, and the upcoming async/await syntax, it's getting better all the time."
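That concurrency model keeps everything on one thread and interleaves asynchronous work; a small illustration using the upcoming async/await syntax (the endpoint is hypothetical):

```js
// One thread, interleaved tasks: await suspends this function while the
// request is in flight, without blocking other event handlers.
async function loadProfile(userId) {
  const response = await fetch(`/api/users/${userId}`); // hypothetical endpoint
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}

loadProfile(1).then((profile) => console.log(profile.name));
```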

One option for achieving parallelism would be to follow the Project Nashorn model and make JavaScript's data model multithreaded, said Herman. But he noted a weakness in this approach. "Unless your host Java program is careful to synchronize your scripts, your JavaScript apps lose all the guarantees of run-to-completion. Frankly, I can't imagine considering such a step right now."

Mozilla and Intel Labs, meanwhile, have done work with deterministic parallelism APIs. "The goal of these experiments was to find high-level abstractions that could enable parallel speedups without any of the pitfalls of threads," said Herman. "This is a difficult approach, because it's hard to find high-level models that are general enough to suit a wide variety of parallel programs."
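Those experimental APIs are not spelled out in the post, but the shape of such an abstraction might be a hypothetical parallelMap: the caller supplies a side-effect-free function, and the engine is free to spread the work across cores without changing the result.

```js
// Hypothetical high-level API in the spirit of those experiments: the
// callback must be free of side effects, so an engine could partition
// the array among cores and still guarantee the sequential result.
function parallelMap(array, fn) {
  // Reference semantics; a parallel implementation must be observably
  // identical to this sequential one.
  return array.map(fn);
}

const squares = parallelMap([1, 2, 3, 4], (x) => x * x); // [1, 4, 9, 16]
```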

One analyst sees an opportunity for parallelism in JavaScript. 

"The lack of parallelism is an area ripe for evolution," said analyst Michael Azoff, of Ovum. "[Herman's approach]  is to make early ventures in parallel JavaScript available to developers and to see what the feedback is," Azoff said. "There are a number of ways in which to implement the parallelism, and the degree to which data is shared reflects on how predictable the machine state can be -- parallel programming has notorious pitfalls like race conditions. So I like the cautious approach Herman takes."


Copyright © 2015 IDG Communications, Inc.
