I was sitting in a meeting with an aggressive project manager who really wanted to stick it to a competing project manager. This manager promised that the code coming out of his team would have "documentation," he said, before pausing like James Bond introducing himself and saying, "Extensive documentation."
The only thing worse than no documentation is a bookshelf filled with binders stuffed with extensive documentation. Some project managers love to measure their progress by the pound, and they see more words as better words. Anyone who's slogged through "Moby Dick" in high school English knows the feeling.
We want information about the code, but no one has an acronym for Too Little Information. Everyone on Facebook knows the initials TMI.
There is no easy answer for the programmer. Ideally, the code is clean and self-documenting enough to avoid the need for long paragraphs explaining just what it is doing. Self-documenting code doesn't fall out of date the way separate text does when someone rewrites the code but never gets around to updating the documentation.
There is hope that an even smarter collection of debuggers and code-analyzing compilers will do a better job of understanding our code. The latest virtual machines already keep copious records of which routines are executed. While most of the emphasis is on performance, this kind of metadata can be more useful than traditional documentation because it shows when and where data actually changes.
But it will be years before we can drink the Kool-Aid and dream about artificial intelligence understanding our code. For now, we're stuck with the problem of how to create just enough documentation to keep everyone happy without shortchanging the feature set.
It's so much easier to call up a new server from the cloud than to fill out a requisition form and ask the server maintenance folks to buy a new one. You push a button and you have your own server.
Alas, this approach can be costly. The servers may be only a few pennies for an hour, but they add up when everyone wants their own cluster for each project. The next thing you know, there are hundreds of servers in the cloud, most of them created by people who left for other jobs several years ago. It's cheaper to keep paying the bills than to figure out what they do.
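To see how "a few pennies for an hour" compounds, here is a back-of-the-envelope sketch. Every figure is hypothetical: the hourly rate, the cluster size, and the number of abandoned projects are assumptions for illustration, not any particular vendor's prices.

```python
# Hypothetical figures: $0.05/hour per server, 20 forgotten projects,
# each with a 5-node cluster that nobody remembers to shut down.
HOURLY_RATE = 0.05        # dollars per server-hour (assumed)
SERVERS_PER_PROJECT = 5   # assumed cluster size
PROJECTS = 20             # assumed number of orphaned projects
HOURS_PER_MONTH = 24 * 30

monthly_cost = HOURLY_RATE * SERVERS_PER_PROJECT * PROJECTS * HOURS_PER_MONTH
print(f"${monthly_cost:,.2f} per month")  # $3,600.00 per month
```

A hundred idle servers at a nickel an hour quietly becomes thousands of dollars a month, which is exactly why it's often cheaper in the short run to keep paying than to audit what the machines do.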
To make matters worse, the servers aren't your own. Some companies are famous for writing one-sided terms of service, claiming, for instance, the right to shut down your machines for "no reason." That seems to be changing as cloud vendors recognize that such overreach drives away the customers with the most money. But no one knows what happens if you hit real trouble in the cloud. Sometimes it helps to control the paycheck and retirement funds of the person responsible for keeping the server up.
The more you outsource, the more you lose control and spin your wheels trying to recapture it. The less you outsource to the cloud, the more you spin your wheels keeping everything running.
You're damned if you do, damned if you don't.