Americans have a deserved, if unflattering, reputation for speaking only one language. Small surprise, then, that the same is often true of American programmers. Today's computer science graduate often leaves school with a strong knowledge of only one programming language -- typically a major systems language, such as Java or C++ -- and goes on to build a career almost exclusively on that language.
On the surface, this makes sense. C++ and Java are both highly versatile, complex tools. Learning the syntax of either one is nothing compared to the study it takes to become familiar with the whole ecosystem of associated libraries and frameworks. Not to mention that both languages are widely used; if you don't know at least one of them, your chances of landing a coding job drop dramatically.
But a software development field that's based almost exclusively on two languages -- languages that are very similar, for that matter -- is also in danger of stagnating. The Sapir-Whorf hypothesis holds that the patterns of human thought are profoundly influenced by the patterns of the language in which the thought is expressed. Linguists disagree as to how strictly this is true of human languages, but for computer programming languages -- which are themselves but restricted subsets of human language -- it seems particularly apt. And yet, while the study of software development has marched forward with concepts such as functional and aspect-oriented programming, mainstream languages have remained tied to the same object-oriented paradigm introduced decades ago.
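To make that paradigm gap concrete, here is a minimal sketch in Scala (a JVM language that comes up again below) of the same computation written two ways: imperatively, in the step-by-step style Java and C++ encourage, and functionally, as a transformation of the input. The function names and data are purely illustrative.

```scala
// Imperative style: mutate an accumulator step by step.
def sumOfSquaresImperative(xs: List[Int]): Int = {
  var total = 0
  for (x <- xs) total += x * x
  total
}

// Functional style: describe the result as a transformation of the input,
// with no mutable state.
def sumOfSquaresFunctional(xs: List[Int]): Int =
  xs.map(x => x * x).sum

val data = List(1, 2, 3, 4)
println(sumOfSquaresImperative(data)) // 30
println(sumOfSquaresFunctional(data)) // 30
```

Both produce the same answer, but a programmer who has only ever written the first version may never think to reach for the second.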
Outside the mainstream, however, the field is exploding. New programming languages are introduced every year, and many of them could make valuable contributions to real-world software projects -- if only they would get used. What will it take for enterprise software developers to start thinking outside the twin boxes of Java and C++?
A cornucopia of new languages
What's good for computer scientists isn't always good for working programmers. Perhaps surprisingly, though, not all of the recent work on programming languages has been strictly academic. Microsoft's .Net platform, with its CLI (Common Language Infrastructure), has been a particularly prolific source of new languages. Wikipedia currently lists no fewer than 55 languages that run on the platform, all of them fully interoperable.
One of the more interesting new additions comes from Microsoft itself. Axum is a language designed to make it easier to write programs that work well on today's multicore, multiprocessing hardware. You may recall that last year I wrote about Fortress, a language with similar aims from Sun Microsystems. What makes Axum interesting, however, is that instead of trying to duplicate all the features of systems programming languages such as Java or C++, it focuses exclusively on parallelism. You can't even define an object in it; you do that in some other CLI language, such as C#. All Axum does is simplify the job of making applications multiprocessing-friendly -- a task that's often grueling in traditional languages.
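I won't attempt Axum syntax here, but the style of programming it targets -- independent agents exchanging messages over channels instead of sharing mutable state -- can be roughly sketched on the JVM. This Scala sketch, with illustrative names of my own, uses a blocking queue as the channel between a producer thread and a consumer thread; Axum's actual constructs are its own.

```scala
import java.util.concurrent.{ArrayBlockingQueue, BlockingQueue}

// A "channel" carrying messages between agents; None marks end-of-stream.
val channel: BlockingQueue[Option[Int]] = new ArrayBlockingQueue[Option[Int]](16)
var total = 0 // written only by the consumer, read only after join()

// Producer agent: sends the numbers 1 through 5 down the channel.
val producer = new Thread(() => {
  for (i <- 1 to 5) channel.put(Some(i))
  channel.put(None) // signal that no more messages are coming
})

// Consumer agent: receives messages and accumulates squares until the
// end-of-stream marker arrives.
val consumer = new Thread(() => {
  var done = false
  while (!done) channel.take() match {
    case Some(i) => total += i * i
    case None    => done = true
  }
})

producer.start(); consumer.start()
producer.join(); consumer.join()
println(total) // 55
```

Because the agents communicate only through the channel, neither thread ever touches the other's state -- which is exactly the property that makes this style easier to scale across cores than lock-based sharing.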
Following the .Net platform's lead, the JVM is opening up as well. A variety of languages are now available that compile to Java bytecode, and some of them are pretty interesting. One that has gained a cult following is Groovy, which offers a Java-like syntax but is actually a dynamic language, similar to Perl, Python, and Ruby. It gives developers the safety and stability of the Java runtime but frees them from the often-restrictive Java syntax.
Still other languages belong to no major platform yet have won limited acceptance in commercial applications. Lua, for example, is a lightweight, embeddable scripting language that has found a number of niche uses, including video game development. It's used in World of Warcraft, among other titles.
Success outside the mainstream
For the most part, these offbeat languages can be found only in niche uses, small projects, and research. But not every enterprise is so staid that it can't look past Java and C++. For example, microblogging pioneer Twitter recently announced plans to scrap its current architecture in favor of an all-new design based on an obscure language called Scala -- which, like Groovy, runs on the JVM. Says Twitter engineer Alex Payne, "We know that people write super-high-performance code in C++ ... but we wanted to be using a language that we're really passionate about, and it seemed worth taking a gamble on Scala."
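A small sketch suggests what a team might get passionate about. In Scala, a case class gives you equality, hashing, and printing for free -- boilerplate a Java programmer writes by hand -- and pattern matching keeps filtering logic compact. The domain here (tweets, a popularity cutoff of 100 retweets) is my own invented example, not Twitter's actual code.

```scala
// A case class generates equals, hashCode, toString, and copy automatically.
case class Tweet(user: String, text: String, retweets: Int)

// collect combines filtering and transformation in one pattern match:
// keep only tweets at or above the cutoff, and extract the author's name.
def popular(tweets: List[Tweet]): List[String] =
  tweets.collect { case Tweet(user, _, rts) if rts >= 100 => user }

val feed = List(
  Tweet("alice", "scala is neat", 250),
  Tweet("bob", "hello world", 3),
  Tweet("carol", "jvm languages", 120)
)
println(popular(feed)) // List(alice, carol)
```

The equivalent Java would need a hand-written class plus an explicit loop or filter chain -- not hard, just noisier.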
Mind you, Twitter may not be the best example. Its original architecture was written in Ruby, at a time when few development houses were willing to gamble on that platform for mission-critical projects. And Twitter's history of repeated outages might lead you to believe that anything would be a worthwhile replacement.
Still, Twitter's example is worth considering. Until venture capitalists, investors, and executive management are willing to accept the guidance of software developers in choosing languages for major projects, languages like Scala are liable to be relegated to the use of a few "passionate" programmers at quirky Internet startups like Twitter -- no matter how much potential they have.
Before development managers can choose from a variety of different languages, however, they need coders on their teams who understand more than one. That's why I'd like to see universities graduate more CS students as "polyglot programmers," rather than language specialists. If nothing else, I worry that students who spend most of their education learning the ins and outs of a specific language's syntax may be missing the bigger picture. Human language skills will always be the most important tool in a good software developer's belt, but a foundation in sound software design is what helps a good developer become great, no matter what the language.