CAPTCHA me if you can!
No, it’s not my Boston accent rearing its ugly head again. There’s a new IT acronym, CAPTCHA, that really rolls off the tongue and that enterprise security folks ought to know about if they don’t already.
While IT security pros have been working hard on systems to make sure users are who they say they are, Web 2.0 developers have been studying a related problem: how to make sure users are actually human beings, rather than machines.
The result is a variety of implementations of CAPTCHA, which stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart. You can see CAPTCHA at work in those little boxes on large Web sites like Yahoo or Ticketmaster, where you must input some distorted letters in a box before proceeding to buy your concert tickets or open an e-mail account. It’s a crude line of defense against bulk spammers and their ilk.
Now before delving into the merits of CAPTCHA, a quick digression on these letter boxes themselves. I don’t like them as much as their spiritual predecessor, the America Online hyphenated starter passwords. Whereas CAPTCHA strings are just a few meaningless letters and numbers, such as TOF7K9, I found the AOL passwords to be intriguing, laden with potential meaning for the superstitious among us (thistle-hubcap, charmer-boxtop, and so on). They made it seem as if the AOL computers had some imagination, not just a randomization algorithm.
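For the curious, the difference between the two styles comes down to what the randomization algorithm draws from. A minimal sketch (the function names and word list here are my own illustration, not anyone's actual implementation) might look like this:

```python
import secrets
import string

def captcha_string(length=6):
    """CAPTCHA-style challenge: a few meaningless letters and digits, e.g. TOF7K9."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def aol_style_password(wordlist):
    """AOL-style starter password: two random dictionary words joined by a hyphen."""
    return f"{secrets.choice(wordlist)}-{secrets.choice(wordlist)}"

# Hypothetical word list for illustration only
words = ["thistle", "hubcap", "charmer", "boxtop", "lantern", "marble"]
print(captcha_string())           # e.g. TOF7K9
print(aol_style_password(words))  # e.g. thistle-hubcap
```

Both are equally mechanical under the hood, of course; the word-pair approach just borrows its raw material from the dictionary, which is why the results feel laden with meaning.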
In any case, CAPTCHA, like many authentication schemes, suffers from the childproof-cap problem: It doesn’t fully keep out unwanted intruders, while frustrating the heck out of many legitimate users. As a W3C Working Group Note on CAPTCHA reported, “this system can be defeated by those who benefit most from doing so ... spammers can pay a programmer to aggregate these images and feed them one by one to a human operator, who could easily verify hundreds of them each hour.”
In the meantime, CAPTCHA schemes put off whole groups of humans, primarily the visually impaired, but also people with dyslexia and short-term memory problems (for more on this see the W3C’s Web Accessibility Initiative at w3.org/WAI). Not to mention the average user who gets to the extra CAPTCHA screen and just decides, Hey, I didn’t really want to see Whitesnake in concert anyway.
For financial services firms, there may be some interesting lessons here in the run-up to compliance with the FFIEC two-factor authentication guidelines later this year (for example, many sites now offer audio CAPTCHA for the visually impaired). But for the rest of us, there are broader questions about the future of authentication in a machine-to-machine, human-to-machine world.
If we suddenly have a lot of machines trying to pass themselves off as humans, in addition to the many human hackers already trying to pass themselves off as machines, what will the future of authentication look like? Will two-factor authentication for certain processes ultimately require both human and machine input? Will I have to hum a little launch code so my robot driver can start my car in the morning? Will the last defensible human competency not replicable by machines be our ability to authenticate? I don’t know, but I will say this: Open the pod bay doors, HAL.