The relatively short history of the Internet is littered with bad ideas and good ideas that were poorly executed. Unfortunately, bad design or execution is never a guarantee of failure. Otherwise, we would never have had to deal with VeriSign's SiteFinder, CSS, or Lotus Notes.
The best among us will always aim for the optimal outcome of any project, technology, or framework, given the obstacles in play at the time. This usually means compromise and invention in order to deliver a functional result in a suitable timeframe.
Entire programming frameworks have been written to solve a single insurmountable problem for a specific project. Tools like Redis and Memcached were not born from a random brainstorm, but to address an immediate and massive need. Hell, John Carmack wrote video drivers for Doom because the ones developed by the hardware manufacturers were not up to the job. Talk about taking a digression from the core project.
When we run into these problems, we're usually in a tight spot, both in terms of project scope and time. One of the hardest parts of project management is determining which of several paths will lead to acceptable results on a reasonable schedule, while knowing that once chosen, there will be few moments to backtrack and try another route. It may look like a better idea to modify an existing framework to meet your needs rather than write a new one, but once you get into the weeds with the existing framework, you could easily discover a blocking design issue that will require starting over from scratch. And so it goes.
Once we stray into those waters, the impulse to move quickly and cut corners is overwhelming. Solid design gives way to the easiest immediate solution. Project organization suffers and documentation is stillborn. As deadlines loom and budgets creep, these pressures grow heavier and harder to resist.
In those days, in those meetings, the core focus should be on limiting the damage of hasty short-term decisions, and not creating your own blocking design problems for the next iteration. That's easy to say, but hard to do in many cases.
It may seem that I'm talking about development here, but this applies to all walks of IT life, from desktop rollouts to core networking infrastructure design. It's certainly more prevalent with the code wranglers, but none of us are immune. Usually, those who commit these sins are not the ones who have to support the result.
When you look at some of the legendary designs, such as Paul Mockapetris and Jon Postel's work designing DNS, or the design of TCP/IP, you see imperfect designs that did not constrain themselves unnecessarily. The ability of DNS to scale to the level it has is nothing short of breathtaking. TCP/IP v4 still runs the world, for better or for worse, and it may be that the design of IPv6 is too much of a departure from IPv4 to truly take hold.
It's not enough that the best projects deliver a functional result on time and under budget; they must also be designed with future extension and their eventual replacement in mind.
This story, "Tech traps 101: How to meet deadlines without sabotaging the future," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com.