Why those guilty of bad code must pay

Judging by reader feedback on last week's post, most people think it's a terrible idea to make companies liable for bad code. Here's why they're wrong

Last week, I wrote a little ditty about why companies like Universal Music should be held accountable for poor code that allows millions of their users' real names, email addresses, and clear-text passwords to be distributed around the Internet. There was quite the reaction, with many people (presumably coders) yammering that this was the worst idea in the history of ever.

But I too am a developer. I've personally coded dozens of account-based Web applications, and not a single one ever stored a clear-text password. At the very least (back in the day), passwords were hashed during registration; at login, the submitted password was hashed again and compared against the stored hash. I think it might have been 1998 when I wrote my first password-hashing function. And here we are, 13 years later, and Universal Music can't be bothered to implement literally a few lines of code to at least obfuscate the sensitive information of their users. That's all it really is -- a few lines of code.
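To put a number on "a few lines": here is a minimal sketch of salted password hashing in Python. The article doesn't show the original 1998 function, so PBKDF2 from the standard library stands in as an illustrative modern equivalent -- the function names here are mine, not from any particular site's code.

```python
# Illustrative sketch only: salted password hashing with Python's standard
# library. The database stores a salt and a digest, never the password itself.
import hashlib
import hmac
import os

def hash_password(password):
    """At registration: hash the password with a random salt; store both."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """At login: hash the submitted password the same way and compare digests."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return hmac.compare_digest(candidate, stored_digest)
```

The key property is the same one that old 1998 function had: the database holds only a salt and a digest, so a leaked table doesn't hand working passwords to whoever finds it.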


So now that the cat's out of the bag and all those accounts are floating around the Internet, why shouldn't Universal Music be held accountable for this negligence? Why should the company escape any penalty whatsoever for such egregious corporate practices? To those who insist it should: I vehemently disagree.

In the United States, at least, very specific laws govern patient information and how it is stored, accessed, and disseminated. HIPAA regulations were put into place to ensure that sensitive patient information isn't distributed to just anyone, but only to the people who need it. They also bar health care providers from discussing patient information with anyone else; each patient must sign a waiver to authorize the release of that information to another person or party. The rules were explicitly designed to protect patients. Yet we have no comparable regulations on the storage, access, and dissemination of sensitive user information on public websites -- none. Thus, there's almost no business case for providing any form of high-level security for customer accounts.

Sure, many places implement significant security measures to protect their users' data, but that's because those developers and product managers actually have a clue or two. An army of ignorant managers and developers does not; they can barely produce functional products, much less functional and secure products. Those clueless people are also the cheapest option when a company contracts out application development -- and the company often has no idea what the code actually looks like, only that it functions. Rarely will a company that goes for the low bid on a contract spend extra for an independent security audit.

Yet people who use these apps place a significant level of trust in them. I mean, they already have an Amazon account, an eBay account, an account at their bank, and so on and so forth. They're all basically the same, right? And as many studies have shown, users often employ the same user name, email address, and password across the sites they frequent. That dumb move makes an individual's personal privacy and security only as strong as the weakest link among all those sites.

So while it's illegal for your doctor or nurse to tell someone else about your last visit, it's perfectly legal for a company of any size to collect vast amounts of sensitive user data and let it leak to anyone who happens to come across it on the Internet. As long as it's not medical information, HIPAA doesn't apply, so there are no repercussions other than a PR hit. That needs to change.

As you read this, someone out there is coding an application that will store clear-text passwords and other sensitive information without any form of security. Very likely, the password retrieval system for that site will also email the user's own password back rather than issue a randomly generated, one-time temporary password with a fixed expiration time. God help me, they're probably not sanitizing their database calls either.
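For contrast with that horror show, here is a hedged sketch of the saner reset flow just described: a random, single-use token with a fixed expiration, stored and looked up with parameterized queries. The table, column, and function names are hypothetical, and sqlite3 stands in for whatever database such a site would actually use.

```python
# Illustrative sketch only: a one-time password-reset token with a fixed
# expiry, written with parameterized queries throughout. Table and column
# names (password_resets, user_id, token, expires_at) are hypothetical.
import secrets
import sqlite3
import time

RESET_TTL_SECONDS = 3600  # the token is valid for one hour

def create_reset_token(conn, user_id):
    """Generate a random one-time token and record when it expires."""
    token = secrets.token_urlsafe(32)
    expires_at = int(time.time()) + RESET_TTL_SECONDS
    # Parameterized query: user input is never spliced into the SQL string.
    conn.execute(
        "INSERT INTO password_resets (user_id, token, expires_at) VALUES (?, ?, ?)",
        (user_id, token, expires_at),
    )
    conn.commit()
    return token  # email a reset link containing this token, not the password

def redeem_reset_token(conn, token):
    """Return the user_id if the token is valid and unexpired, consuming it."""
    row = conn.execute(
        "SELECT user_id FROM password_resets WHERE token = ? AND expires_at > ?",
        (token, int(time.time())),
    ).fetchone()
    if row is None:
        return None
    conn.execute("DELETE FROM password_resets WHERE token = ?", (token,))
    conn.commit()
    return row[0]
```

The point isn't the specific library; it's that the token is random, short-lived, and consumed on use, and that user input never gets concatenated into the SQL string.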

These egregious examples of horrible coding practices need to be regulated; blanket guidelines regarding data compromise miss the mark and address the situation after the fact. Any developer worthy of the name must agree that we'd all be better off if unspeakably poor design choices like those I've described were eliminated forever.

As events have proven time and again, this is not a problem that will regulate itself. Companies will continue to go with the low bid, and the low bid will continue to employ substandard coders and managers. The cycle will continue, unless penalties make it very painful to play so fast and loose with customer information.


