In the never-ending battle against spam, we have, over the past few years, seen the advent of the “CAPTCHA” - a graphic representation of letters and/or numbers that is supposedly readable by humans, but not by computers. The idea is to filter out automated systems that try to sign up for accounts, send email, make posts, or whatever, and to allow only real humans through.
It’s not working.
Last November, Jeff Atwood (Coding Horror) wrote about some well-known CAPTCHAs, noting that some were, apparently, unbreakable. But now, we find that they have all been broken.
I don’t know about you, but for me, the Hotmail and Yahoo ones are really tough to read. So much so that when I first tried to sign up for a Yahoo account, I finally gave up and went elsewhere, because I couldn’t get a CAPTCHA I could actually read.
So, are CAPTCHAs a good idea or a bad one?
This is part of a bigger question - one that every online service has to deal with: The trade-off between security and usability. There is no such thing as a totally secure website; only greater or lesser degrees of security are possible. Any computer connected to the internet is potentially vulnerable. Any server that permits user input is even more vulnerable. And the easier it is for users to enter data, the easier it is for automated systems to enter data. The easiest system to use is the easiest system to abuse.
So what is the answer?
A number of options have been suggested: picking one picture out of a group, identifying what a certain picture shows, or answering simple math problems spelled out in words. But these methods can still fall to a brute-force approach. Multi-step user verification is becoming more popular as well - where you are asked to respond to an email in order to gain access. But even this can fail when automated systems are used to respond to the email.

I don’t think there will ever be a perfect answer, but I do have one idea to suggest: how about a multi-step verification that has you answer a question?
- User “RealGuy” signs up for an account.
- The system sends “RealGuy” an email, asking him a simple question (such as “Are you really human?”)
- “RealGuy” then clicks on the link in the email, which takes him to a form where he enters the answer to the question - in a text box.
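As a rough sketch, the three steps above might look something like this server-side. Everything here is hypothetical - the class name, the question text, and the example.com link are all illustrative placeholders, not a real implementation:

```python
import secrets

class VerificationStore:
    """Tracks pending sign-ups: token -> (username, expected answer)."""

    def __init__(self):
        self._pending = {}

    def start(self, username):
        # Step 1-2: user signs up; generate an unguessable token to embed
        # in the emailed link, along with the simple question.
        token = secrets.token_urlsafe(16)
        question = "Are you really human? (yes/no)"
        self._pending[token] = (username, "yes")
        # A real system would send an email here containing the question
        # and this link (hypothetical URL):
        link = f"https://example.com/verify?token={token}"
        return token, question, link

    def answer(self, token, response):
        # Step 3: user follows the link and submits the answer in a text box.
        entry = self._pending.get(token)
        if entry is None:
            return False  # unknown or already-used token
        username, expected = entry
        if response.strip().lower() == expected:
            del self._pending[token]  # one-shot: token can't be replayed
            return True
        return False
```

The token proves the responder received the email; the free-text answer is the part a bot would have to parse and understand, which is what makes this harder to automate than a bare confirmation link.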
Is that method foolproof? No. But it certainly seems like it would block almost all automated systems, while remaining simple enough for almost all real humans to figure out. How well will it work? I don’t know. But I’m thinking of trying it out quite soon - nothing tells you how well something actually works like a real-life test. I’ll report back once I get some feedback.