Anyone can write code. Not everyone can throw it away.
Mr. Z was the black-and-white darkroom instructor at one of the top-rated schools of photography. In one class, he picked out a negative from each student's portfolio and said, "Print this." One would-be professional photographer burned and dodged, spilled fixer all over himself, and proudly showed the results to the instructor. Mr. Z glanced at the printed photograph and said, "It's too dark. Try again." On the next attempt, Mr. Z said, "Not enough contrast. Print it again." Iterate until exhaustion.
Finally, after the student's clothes were permanently impregnated with the smell of fixer, Mr. Z approved of the photo. "Very good!" he said. "Now, throw it away."
"What?!" replied the student. "You just said the photo was good!" Mr. Z responded, "It is printed very well. Very good technically. But it's an uninteresting subject. Throw it away."
That lesson stayed with the photographer—who later spent a few years as a black-and-white darkroom manager—but it's one that developers need to learn, too. Just because your code works doesn't mean it's done. Your application might meet the specs. But that won't make it great.
That working app won't make you a better programmer, either. Because I've come to believe that all great software is written three times. The first time you write it is to see if your idea can work at all. It's the digital equivalent of scratching something out on the back of an envelope, leaving out the fancy stuff and just concentrating on the basic feature or algorithm. Once you figure out that yes, this might be a good way to solve the problem, you write the code a second time, to "make it work." But it's the third time you write the code, after you've had the chance to learn from the mistakes of the "make it work" phase, that your application will be the best it can be. (Well, almost. There's often a 3.1, too. Even great software has a few bugs.)
Don't trust me on this? Look at all the "best of" software you ever worked with. I sure didn't love Windows 3.1, but it was certainly the apex of Microsoft's vision and architecture of the time. That "third try" might not carry a 3.0 or 3.1 version number, depending on marketing decisions (it used to be easier to get someone to upgrade from 4.0 to 5.0 than to pay real cash for 4.1) and on the nature of the development community, which (particularly in the open source universe) is happy to trundle along for years at version 0.23 no matter how many design iterations the code has gone through.
Not all great software has a Version 3 sticker, because sometimes the development staff get to throw away version 1 or 2 before the product is launched. But look at the software you most loved, before it was overloaded with feature-itis. I betcha it was the third try.
The hard part, though, is learning to throw out code and start over. It always seems easier to edit and debug the code you've written, but every brilliant programmer I've known (and I've known quite a few!) has talked about the need to start with a clean editor screen and write the routine anew, without reference to the old code. It's long been a tenet at the Schindler bitranch that when you find a block of code with several bugs, it's time to dump the whole file and write it again from scratch. That's faster than trying to squash all the bugs in your bug factory, because bugs tend to congregate and hide behind one another.
Maybe it's because the initial design premise for that block of code was wrong, or perhaps it's just because you (or another developer) wrote it with a hangover. Either way, propping up your "quick and dirty" solution by investing more time and effort into it is a fast track to a big ball of mud, if not a doomed project.
But consider your own career development: the real reason to throw away code and start over is that it forces you to re-think what you're doing, giving you a better chance of discovering a better solution to your problem. As Guido van Rossum describes in his history of Python:
Sadly, I'm sorry to say that raising an overflow exception was not really the right solution either! At the time, I was stuck on C's rule "operations on numeric type T return a result of type T." This rule was also the reason for my other big mistake in integer semantics: truncating the result of integer division, which I will discuss in a later blog post. In hindsight, I should have made integer operations that overflow promote their result to longs. This is the way that Python works today, but it took a long time to make this transition.
In other words, you do have to re-think some things to make them work better. And you won't do that by "fixing the code" in an existing module.
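To make Guido's point concrete, here's a minimal sketch of the two integer semantics he contrasts, run under Python 3. The int64_mul helper is hypothetical, written here only to imitate C's fixed-width rule; modern Python ints simply never overflow.

```python
# Python 3: integer results are promoted to arbitrary precision,
# so operations never overflow.
big = 2 ** 62
print(big * 2)  # 9223372036854775808 -- wider than a C int64, no error

# Hypothetical helper imitating C's rule "operations on numeric
# type T return a result of type T" for a 64-bit signed integer.
def int64_mul(a: int, b: int) -> int:
    """Multiply with C-style 64-bit two's-complement wraparound."""
    r = (a * b) & 0xFFFFFFFFFFFFFFFF  # keep only the low 64 bits
    return r - (1 << 64) if r >= (1 << 63) else r  # reinterpret as signed

print(int64_mul(big, 2))  # -9223372036854775808 -- silently wrapped
```

Under the old rule the wrapped result is silently wrong; raising an exception was an improvement, and promoting to longs, as Python does today, better still. Guido only got there by abandoning his first answer.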
You might be hollering at the screen by now, telling me that it's all well and good for me to say you should be willing to start over, except that in your shop they're all screaming for that app right now, and you can go back and "fix it" later. Except, of course, you never do get back to it, because there's always the next crisis to deal with. I get that. There are times when responding with a Yes Ma'am is the right approach (because truthfully, not every line of code you write is going to be your best work). And if you're lucky or smart enough to work in an office that lets you add functionality one step at a time, you can often throw out the "wrong" code... one step at a time.
But even in the worst cases, you have to take the attitude that you're the professional, and it's your job to serve your client/user. It might appear to the passenger in the back seat that driving across the parking lot is the fastest way to the entrance, but it's really not the safest or the most efficient route.
And in particular, the Yes Ma'am approach won't help you become a brilliant programmer. Because brilliant programmers are always willing to throw out a "working" solution in order to find a better one.
This story, "Becoming a Great Programmer: Use Your Trash Can," was originally published by JavaWorld.