Writing great software is not that hard. But software developers can be their own worst enemy in trying to code the good stuff because they lapse into sloppy or wrongheaded practices.
Actually, scratch that: The developer's worst enemy is really the eager technical manager who tries to deliver a project faster than possible and pushes developers to engage in ill-advised practices. In high-end enterprise and Web-scale projects in particular, that can result in wholesale disaster.
These pitfalls are well known. Few developers would argue with them -- or at least with most of them. Those who dare to disagree may do so in the comments section at the end of this article.
1. Go even a day without writing a unit test
Developers love to get mired in details like the difference between a unit test and a functional test. The answer: Who cares? What matters is that you have good coverage and can tell when something breaks. It matters that you have a good starting point to run code and set a breakpoint in the debugger. The only way this can work is "as you go."
Tests are also good expressions of your requirements. Despite a mountain of evidence, I still occasionally hear: "You need to prove to management that unit tests are worth the time."
These managers are the tech industry's equivalent of climate change deniers. No amount of evidence will ever meet their burden of proof. They're doomed to deliver very late, buggy projects that fail to meet user expectations.
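Writing tests as you go is less ceremony than it sounds. A minimal sketch using Python's standard unittest module (the pricing function and its rule are invented for illustration) shows how a test doubles as a runnable statement of a requirement:

```python
import unittest

def apply_discount(price, percent):
    """Return price reduced by percent -- the business rule under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.00, 25), 150.00)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)
```

Run it with `python -m unittest` and you get exactly the starting point described above: a place to run the code, set a breakpoint, and know the moment something breaks.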
2. Go even a day without a build
With tools like Jenkins CI, there is no excuse. It takes only a few hours and a VM to set up an instance of Jenkins suitable for most projects. It can even be configured to run when code is checked in to a revision control system such as SVN or Git. The unit tests can run, metrics can be gathered, and emails can be sent when something breaks. Your repeated build is the heartbeat of a healthy project. You can't live without a heartbeat.
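At its core, the job a CI server runs on every check-in is simple: run the suite, and make noise when it fails. A toy stand-in (the command and notification behavior here are hypothetical; a real Jenkins job would email the team or flag a dashboard):

```python
import subprocess
import sys

def run_build(test_command):
    """Run the project's test suite; return True when the build is green."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode != 0:
        # A real CI job would notify the team here instead of printing.
        print("BUILD BROKEN:\n" + result.stdout + result.stderr)
        return False
    print("Build green.")
    return True

# Example invocation a CI server might use on each check-in:
#   run_build([sys.executable, "-m", "unittest", "discover", "-s", "tests"])
```

The point is not the script but the cadence: something like this must run on every check-in, not when someone remembers.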
3. Use ClearCase (or any slow or pessimistic locking revision control system)
ClearCase is slow -- period. And it isn't the only horribly slow revision control system. Anything that makes developers wait to check out or check in their code is a massive drain on productivity. It also concentrates risk: even with a repeated build, if developers put off checking in until they have time, the build is nearly useless.
Worse, this means a given machine may have the only copy of a developer's work for a longer period of time. Over a lengthy enough timeline, the survival rate for any piece of hardware approaches zero.
A pessimistic locking revision control system isn't just a disaster in the sense of "oops, forgot to check in and went on vacation." It's a continual drain on the project. I find it incredible that some people still believe it's better to have half the team waiting for a file to unlock than to risk two people working on the same file (and probably on different parts that will merge automatically anyhow).
4. Deliver to production without a branch
A vast number of organizations have not yet figured out how to create a branch. Branching is the magic bullet that allows you to deliver a release, fix bugs in that release, but not release any half-developed new code to production. Branching is not actually difficult. There are several effective strategies for it. In fact, every revision control system I've encountered in the past few years supports it. However, branching requires that developers familiarize themselves with their revision control system.
5. Wait until the end to load-test
Even some of the most effective organizations I've seen -- ones that have embraced test-first development, pair programming, and all of that -- still treat load testing as something to do at the end of the project.
The justification offered is the axiom that "premature optimization is the root of all evil." There is some truth to that -- at the micro level. But you need to know early whether you are making fundamental decisions that will keep your project from meeting its performance or scalability requirements. The cheapest time to catch a wrong one is early in the project.
We're not talking iterators vs. loops or monads. We're talking the wrong data store, wrong algorithm, wrong rules engine, and horrible concurrency issues. Those issues can incur a huge amount of rewriting if caught too late.
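An early load test doesn't need a heavyweight tool to be useful. A minimal sketch (the handler is whatever thin slice of the real stack you can exercise -- say, a function that hits the candidate data store):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def load_test(handler, concurrent_users, requests_per_user):
    """Hammer `handler` from several threads and collect per-request latencies."""
    latencies = []
    def one_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handler()
            latencies.append(time.perf_counter() - start)
    # The context manager waits for all simulated users to finish.
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(one_user)
    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    return {"requests": len(latencies), "p95_seconds": p95}
```

Point something like this at the wrong data store or the wrong algorithm in iteration two, and you find out in iteration two -- not the week before release.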
6. Develop without capacity/performance requirements
The first question I ask when helping people with performance or scalability problems: How many users does the business expect? Regardless of the technical roots of the problem, the shrug I often get in response to this question is the real cause. A successful project has at least a vague estimate.
This isn't just good software; this is basic business forecasting. To develop a reasonable load test, you need performance expectations. You need to know how many users the system should be expected to handle.
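Even a vague estimate can be turned into a concrete target. As a sketch (every business number here is invented), Little's Law relates concurrency, throughput, and response time:

```python
def required_throughput(concurrent_users, avg_response_seconds):
    """Little's Law: L = lambda * W, so lambda = L / W (requests per second)."""
    return concurrent_users / avg_response_seconds

# Invented estimate: 10,000 daily users, 5 percent active at peak,
# each expecting responses within 2 seconds.
peak_concurrent = 10_000 * 0.05                      # 500 concurrent users
target_rps = required_throughput(peak_concurrent, 2.0)
```

That back-of-the-envelope number -- 250 requests per second in this made-up case -- is what the load test from the previous section gets measured against.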
7. Wait until the end to engage users
Marketing professionals have used focus groups for decades. Someone has to validate that the product development group has hit the mark and that someone will actually buy the result. The same goes for software development. Whether it is an internal or an external customer, someone needs to make sure the end product will pass muster with users as early as possible.
It can be embarrassing and troublesome to show your software in a "rough state," but if you don't, whether or not you meet user expectations will be left to chance.
8. Try to buy your way out of software development
The buy-vs.-build question is one of the most basic conundrums in IT. Obviously, commercial apps make more sense than internal app dev in some cases, as do commercial or open source programs that may be woven into some larger project. But it's also possible to license, say, the entire Oracle or WebSphere stack and deliver absolutely nothing. There's a limit to how much stuff your development team can actually absorb and use before the complexity of the stack outweighs any supposed technical benefits.
9. Write your own cache, database, thread pool, connection pool, transaction manager ...
Unless you work for a company or an open source project dedicated to developing one of these, there's almost never a reason to write one, even if you "know what you are doing." Don't code what you don't need to code when reliable solutions that work have been QA'd by the multitudes. At least 99 percent of the time, that validation will outweigh your reasons for "writing a better one."
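In most languages the standard library or a mature package already covers these. A Python sketch of the cache case (the fetch function is a stand-in for a database or network call): the hand-rolled version has no size bound, no eviction policy, and no thread-safety story; the stdlib version is one decorator.

```python
import functools

def expensive_fetch(key):
    # Stand-in for a database or network call.
    return key.upper()

# Tempting but unnecessary: a hand-rolled cache that grows without bound.
_naive_cache = {}
def slow_lookup_naive(key):
    if key not in _naive_cache:
        _naive_cache[key] = expensive_fetch(key)
    return _naive_cache[key]

# The QA'd-by-the-multitudes version: bounded, thread-safe, instrumented.
@functools.lru_cache(maxsize=1024)
def slow_lookup(key):
    return expensive_fetch(key)
```

The same reasoning applies, with higher stakes, to thread pools, connection pools, and transaction managers.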
10. Code directly to the RDBMS by default
A considerable amount of nonsense is being written about object-relational mapping (ORM) systems these days. Actually, there's always been a considerable amount of nonsense written about ORM systems. Typically one or two edge cases are used to justify abandoning the ORM and writing "directly" to JDBC or OleDB or whatever. The truth is you can't afford to debug the extraneous CRUD code. Every ORM system I've ever used gives you a way to handle those one or two edge cases directly without abandoning the ORM entirely.
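The escape-hatch pattern looks roughly like this toy sketch (the class and schema are invented, and this tiny mapper stands in for a real ORM, which would generate the everyday CRUD for you -- think SQLAlchemy's `text()` or Hibernate's native queries):

```python
import sqlite3

class UserStore:
    """Toy stand-in for an ORM layer over a users table."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, logins INTEGER)"
        )

    def add(self, name, logins=0):
        self.db.execute(
            "INSERT INTO users (name, logins) VALUES (?, ?)", (name, logins)
        )

    def find_by_name(self, name):
        # The mapped, everyday CRUD path -- in a real ORM, generated for you.
        return self.db.execute(
            "SELECT id, name, logins FROM users WHERE name = ?", (name,)
        ).fetchone()

    def heavy_report(self):
        # The one edge case: drop to hand-tuned SQL here, via the ORM's
        # native-query hook, instead of abandoning the mapper everywhere.
        return self.db.execute(
            "SELECT name FROM users WHERE logins > 10 ORDER BY logins DESC"
        ).fetchall()
```

One hand-tuned query in one method; everything else stays mapped, generated, and out of your debugger.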
This article, "10 practices of highly ineffective software developers," was originally published at InfoWorld.com.
Copyright © 2012 IDG Communications, Inc.