How IBM started grading its developers' productivity

Tooling and process can only go so far to assure code quality. IBM is analyzing application developers based on the volume and quality of the work they do.

IBM has plenty of application development tools at its disposal, including those from its own Rational software line. But tooling and process can only go so far to assure code quality.

"At the end of the day, people are in the middle" of application development, says Pat Howard, vice president and cloud leader in IBM's global business services division. "It's really important to have great investments, great energy focused around the talent."


Howard led application development at IBM, where he was responsible for delivering applications across all of Big Blue's brands and overseeing its global development teams. On the talent front, he helped implement a system for analyzing individual application developers based on the volume and quality of their work.


The core of the system is a commercial software product from Cast. The French vendor's automated software analysis and measurement platform provides metrics around the structural quality of application code and the performance of development teams.

Using the Cast platform, IBM managers can analyze how a developer pieces code together, for instance, and make a quantitative determination of the developer's abilities. The system can review the code's performance, security, and technical debt -- gauging, for instance, whether the code is going to have a lower cost of maintenance in the long term.

"If you're writing something in Java, is the code itself structured in a manner that is compliant with what is recognized as an industry best practice? That's the type of science that Cast helps produce," Howard says.

It's a quantitative analysis, rather than a subjective observation.
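Cast's actual rule set is proprietary, but the kind of structural check such a platform automates can be sketched in a few lines. The example below is a hypothetical illustration, not Cast's method: a Python snippet that flags empty Java catch blocks, a commonly cited best-practice violation because swallowed exceptions raise long-term maintenance cost.

```python
import re

# Hypothetical rule: an empty catch block silently discards an exception,
# a widely recognized Java maintainability and reliability violation.
EMPTY_CATCH = re.compile(r"catch\s*\([^)]*\)\s*\{\s*\}")

def count_empty_catches(java_source: str) -> int:
    """Count empty catch blocks in a Java source snippet."""
    return len(EMPTY_CATCH.findall(java_source))

snippet = """
try { load(); } catch (IOException e) { }
try { save(); } catch (IOException e) { log.error("save failed", e); }
"""
print(count_empty_catches(snippet))  # 1
```

A production platform applies hundreds of such rules across languages and rolls the findings up into team- and individual-level quality scores; this sketch only shows the shape of a single rule.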

In the past, it has been a challenge for IBM to manage application development talent in a way that's reliable and predictable, particularly since teams are scattered around the world, Howard says.

Plus, the demand on development teams is constant. Anytime IBM makes a change to the business -- an update to its supply chain, adding new salesforce capabilities, or preparing a new product release -- invariably there are application changes required.

Taking the time to identify IBM's top software developers wasn't easy. Nor was it easy to figure out if all of its developers were fully utilized. Plus, utilization alone isn't an effective way of measuring the contribution of an individual. Is a developer producing quality work?

The Cast technology answers those questions and more while bringing some science to its measurements, and it also plays into IBM's plans to motivate its developers.

"When you think about a software developer, and you think about that talent, what are they interested in doing? A lot of them want to write software. That's why they went into the profession. But they also want to be known as the best software developer on the planet Earth," Howard says.

By defining sets of outcomes, or measures, that everyone in IBM's software development community (and its HR reps) could agree on, the Cast system makes it possible to quantify performance. "Essentially it permitted our people to walk around with a scorecard. They could begin to earn points, based on the results or the value they were driving for the business," Howard says.

With IBM's new system, reputation becomes something tangible. "Somebody can enhance their reputation within the community based on results that they're delivering."

The program also helps to identify performance shortfalls and skills deficiencies. "We use it to identify where more training is needed," Howard says. Training budgets are tight, so "when you spend it, you've got to spend it really smartly, aim it at the right place."

It also enables developers to get on-the-spot insight into the quality of the work they're producing. With that kind of feedback, developers can make mid-course corrections that are necessary to succeed, Howard says.

So how do IBM's developers feel about the grading system?

Reactions are mixed, Howard says. Some developers embrace it aggressively -- especially those who tend to be data-driven, he says. Some proactively use the system to get feedback on their own work; in other cases, managers bring the data to a team member's attention.

"This is never intended to be a penalty conversation," Howard points out. "We're in a continuous learning environment, and if everybody feels safe around that point, it can be better integrated."

Overall, the system has proven to be very valuable, Howard says. "It has really wrapped our worldwide community together in a way that we didn't anticipate."


This story, "How IBM started grading its developers' productivity," was originally published by NetworkWorld.
