Measuring engineering velocity misses all the value

Our preoccupation with software development speed has made velocity the predominant metric of success for engineering teams—to the detriment of people, process, and product value.


In the world of continuous delivery, frequent dev releases have become the status quo, with most companies racing to make daily updates to their products. With the support of cloud and advanced automation technologies, our CI/CD pipelines are roaring (as are security and quality concerns). But as speed has become the focus, velocity has become the predominant metric of success for engineering teams—to the detriment of people, process, and product value.

What do we miss when engineering velocity is the only metric? What’s going on outside this narrow field of view?

Software development in the dark

With the rise of scrum, story point velocity has become the dominant driver of agile software development life cycles (SDLCs).

How many story points did the team complete this week? How can we get them to deliver more points while still meeting the acceptance criteria?

Speed is treated as synonymous with success, and acceleration is hailed as the primary focus of any successful engineering enterprise. Deliver more story points and you’re clearly “doing the thing.”
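It's worth pausing on how little the metric actually contains. Velocity, as typically computed, is just a rolling average of story points completed per sprint. A minimal sketch (with hypothetical numbers) makes the omissions plain: ticket quality, rework, defects, and technical debt appear nowhere in the calculation.

```python
# Hypothetical data: story points completed in each of the last five sprints.
completed_points = [21, 34, 18, 27, 30]

def velocity(points_per_sprint, window=3):
    """Rolling-average velocity over the most recent `window` sprints.

    Note what the inputs *don't* include: whether tickets were well
    formed, how much time went to clarifying them, or how much debt
    the sprint left behind.
    """
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

print(velocity(completed_points))  # (18 + 27 + 30) / 3 -> 25.0
```

A team that cuts corners to inflate `completed_points` looks identical, by this measure, to a team delivering durable value.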

The impulse is not without some logic. From the C-suite perspective, a perfect product that misses its moment on the market isn’t worth much. Sure, it may be full of engineering genius, but if it generates little to no business value, it quickly becomes more “museum relic” than “industry game-changer.” It pays to be first. In fact, one study found that accelerating time to market by just 5% could increase ROI by almost 13%.

However, I believe that a simplistic obsession with speed misses several factors critical to optimizing the actual impact of any software solution.

Consider all of the questions we’re not asking when we focus so narrowly on velocity:

  • Are the product specifications well formed and aligned with how the team wants to deliver the product?
  • Are we building the right solution to the targeted business problem?
  • Can the code be crafted with quality to avoid painful rewrites down the road?
  • Better yet, can the code be test-driven in the first place so that refactoring is trivial and requirements are documented?
  • How much technical friction will be carried forward to the next iteration?
  • Will the system be secure, stable, and scalable? Readily extensible too?

Obviously, we’re all shooting for speed and quality, but the current fascination with velocity metrics across engineering teams only encourages bad habits on all sides.

And all too often, we don’t hold every phase of the development cycle accountable to the same metric-based delivery assessments. Product teams are generally not evaluated based on their velocity, nor are the different parts of the product pipeline held mutually accountable for engineering delays.

What do our metrics miss?

To be fair, it’s difficult to measure the velocity of product specifications. There’s an implicit understanding that product development is a process, or even an art, one that requires experimentation and iteration. I suggest that this is also true of software engineering.

Reducing the measure of a product team to the pace at which it outputs tickets to the engineering team would be manifestly absurd.

In the midst of their iterations, however, the product team at some point starts creating tickets—without any real metric of whether those tickets are well formed and workable by a team, without any assessment of whether those tasks can realistically be completed in the available time, and without any requirement that those tasks include backup plans for black swans, difficulties, and delays.

Engineers work those tickets and create and deploy code. And it’s the speed at which they deliver those deployments that gets measured to evaluate productivity.

Within this cycle, the engineering team may inherit some tickets from the product team that innocently overlooked a potential conflict or neglected an important requirement. But by that time, engineers are working against a launch date or projected timetable—and getting evaluated based on their speed. Since velocity is the KPI, time spent troubleshooting tickets ultimately impedes the team’s overall performance.

For example, let’s say you receive a ticket whose requirements all stem from a misunderstanding on the product side, something only an engineer would be likely to catch—and you spend 20 minutes teasing out that fact. That’s precious time you could have spent writing a test for a function or developing something else that moves the needle. And sometimes, you may not realize the ticket was wrong until you’ve already spent substantial time coding.

Now, in a vacuum, maybe 20 minutes or half an afternoon isn’t such a catastrophe for the timeline. But as the problem repeats over multiple iterations, suddenly you’re struggling to make up the lost time, generating more and more “product debt” along the way.

Dates drive bad decisions

Imperfect tickets paired with velocity-based KPIs encourage bad habits on the engineering side of the equation. When the deadline is the only marker that matters, and engineers have no choice but to sort out these tickets before proceeding, development corners end up being cut in the race to the finish line.

Coding gets sloppy, defects swell, and technical debt threatens to drown the undertaking. In the end, the product’s rushed configuration only makes it more difficult to add new features in the future, reinvigorating the vicious cycle for the next go around.

But hey, it was on time.

An environment in which software engineers feel constantly pressed for time, unable to do their best work and on the hook for a measurement that’s not fully within their control, is also a recipe for burnout and rapid turnover. For the sake of our development teams, and as an industry, we simply can’t afford to keep losing our best talent, now or in the future.

According to the U.S. Bureau of Labor Statistics, the demand for software developers will grow by 25% from 2021 to 2031. This is not a time to test the resolve of some of your most valuable employees, or try to find out just how far you can push them—especially when the end product also suffers as a result.

The chicken or the pig?

In any agile SDLC, problem-solving is a key, embedded part of the process, and we know the output of development isn’t ever going to be perfect (hence the subsequent testing phase). But without any KPIs related to ticket quality, the product team has little incentive to rethink its process, so the system continues to generate unnecessary waste—waste that can leave the engineering team scrambling against the clock.

The scrum fable of the chicken and the pig best illustrates the role discrepancy. In the process of creating breakfast, the chicken is involved, but the pig is committed. Product teams contribute requirements: good, bad, or usually somewhere in between. Imperfection is inevitable and the need for iteration is guaranteed. But it’s the engineering teams that often end up in the frying pan when the timeline diverges from the plan... even if the end result is a more valuable solution.

Why is speed so often regarded as the one truth of software development when so often it hinders critically important moments of collaboration and quality assurance? Rather than mechanically producing, receiving, and executing tickets, product and engineering teams need to reach across the aisle and embrace the give-and-take necessary to develop truly effective solutions, including ample contingencies for when things don’t go according to ticketed plan.

Keep it blameless, keep it moving

So, am I proposing harsher sentences for faulty tickets? Not at all. Product teams should be granted grace and flexibility as they craft more valuable solutions, but engineering teams could use some, too.

Software development is a process, even an art, carried out by and for humans. Insisting on a machine-like cadence prevents the type of creative thinking and iteration necessary to create the best solution possible. Progress often doesn’t follow a linear, predictable path. Approaches don’t work out, and you have to go back and try a different idea. Sometimes you find a better way that creates more value, even if it takes a little longer. Some breathing room would be nice.

But perhaps more importantly, product and engineering teams need to come together in closer collaboration both during the development process and in retrospectives to support positive outcomes and continuous improvement. More communication up front could save a lot of time in the long run as both teams have a chance to iron out the transition from functional requirements to coded reality.

And let’s not forget the core tenets of the blameless postmortem. Without shared accountability, organizations will not be able to develop a nuanced understanding of what went wrong during any given sprint. In a blameless culture, regardless of whose “fault” it was, both teams would work together to understand why things went wrong, grasp the full complexity of what happened, and learn how to correct for it in the future. If engineering teams need to find ways to work more efficiently, fine, but product teams probably also need to iterate toward continuous improvement of their tickets.

Everybody get together

Metrics like speed aren’t a great measure of success for development teams. In fact, they fundamentally mischaracterize how a product should be built: thoughtfully and collaboratively across both product and software development functions. Speed isn’t the be-all and end-all. A true measure of success is a combination of the business value—as defined by product owners in collaboration with the engineering team—and the engineering team’s delivery of that value.

It’s no secret that we would be nowhere without daring product teams that question assumptions and test the limits of what’s possible—along with engineering teams who embrace that type of challenge. We need both of these groups at their best, taking risks and testing the limits of their abilities. Only together can product and engineering teams create the high-quality, groundbreaking software that raises the stakes, defies expectations, and speeds past the competition.

Copyright © 2023 IDG Communications, Inc.