
Posts Tagged ‘security’

Firefox 3 Password Manager is a little TOO helpful

Thursday, July 10th, 2008

I was trying to fix a bug today where saved usernames and passwords in Firefox were showing up on other forms, in the wrong fields. Pretty simple, I thought - just change the names of the password fields so they’re different from the ones on the login page.

It didn’t work.

Apparently, the new Password Manager is designed to defeat measures such as changing the names of the password fields so that the user has to enter the password manually. It looks for a field with the same name first, but if it doesn’t find one, it will put the password in the first type=password field it finds - and then put the username in the text field just before that.

I really can’t see how this is good. But it’s not a bug. It was an intentional design decision by the Mozilla Foundation:

Firefox stores passwords with this metadata:

domain, usernamefield, passwordfield, username, password

It then uses the usernamefield/passwordfield values as hints to find the appropriate <input> elements within a webpage by matching them against the “name” attribute.

Unfortunately this means that when a website redesigns and changes the un/pw field names, the effect on the end user is that the password is “forgotten”.

As a backup, when usernamefield/passwordfield fail to match, Password Manager should attempt to discover the password field manually, using a technique similar to what Camino uses.
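The fill behavior described above can be sketched as a small function. This is a simplified model of the heuristic, not Firefox’s actual code - the field structure and names are made up for illustration:

```javascript
// Simplified model of the fill heuristic described above - NOT Firefox's
// actual implementation. Each form field is modeled as { name, type }.
function findFillTargets(fields, savedUsernameField, savedPasswordField) {
  const byName = (name) => fields.find((f) => f.name === name);

  // Step 1: try to match the saved field names exactly.
  let usernameField = byName(savedUsernameField);
  let passwordField = byName(savedPasswordField);
  if (usernameField && passwordField) {
    return { usernameField, passwordField };
  }

  // Step 2 (fallback): grab the first type=password field on the page...
  const pwIndex = fields.findIndex((f) => f.type === 'password');
  if (pwIndex === -1) return null; // nothing to fill
  passwordField = fields[pwIndex];

  // ...and put the username in the nearest text field before it.
  usernameField = null;
  for (let i = pwIndex - 1; i >= 0; i -= 1) {
    if (fields[i].type === 'text') {
      usernameField = fields[i];
      break;
    }
  }
  return { usernameField, passwordField };
}
```

Renaming the fields changes nothing here: as long as the page has any type=password field at all, the fallback path still finds a target - which is exactly the behavior I ran into.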

While I understand trying to make things easier for your users, sometimes you can go too far. This, I think, is an example of that. It actually causes usability problems. See an example of a problem this can cause here. While this is a contrived example, it should be easy to see how a complex site could run into this sort of problem.

Personally, I think Firefox needs to rethink this. It is not a good thing.

Keeping users out

Monday, March 24th, 2008

In the never-ending battle against spam, we have, in recent years, seen the advent of the “CAPTCHA” - a graphic representation of letters and/or numbers that is, supposedly, readable by humans but not by computers. The idea is to filter out automated systems that try to sign up for accounts or send email or make posts or whatever, and to allow only real humans through.

It’s not working.

Last November, Jeff Atwood (Coding Horror) wrote about some well-known CAPTCHAs, noting that some were, apparently, unbreakable. But now we find that they have all been broken.

By computers.

I don’t know about you, but for me, the Hotmail and Yahoo ones are really tough to read. So much so that when I first tried to sign up for a Yahoo account, I finally gave up and went elsewhere, because I couldn’t get a CAPTCHA I could actually read.

So, are CAPTCHAs a good idea or a bad one?

This is part of a bigger question - one that every online service has to deal with: The trade-off between security and usability. There is no such thing as a totally secure website; only greater or lesser degrees of security are possible. Any computer connected to the internet is potentially vulnerable. Any server that permits user input is even more vulnerable. And the easier it is for users to enter data, the easier it is for automated systems to enter data. The easiest system to use is the easiest system to abuse.

So what is the answer?

A number of options have been suggested: picking out one picture from a group, answering what a certain picture is of, answering simple math problems spelled out in words. But these methods can still fall to a brute-force approach. Multi-step user verification is becoming more popular as well - where you are asked to respond to an email in order to gain access. But even that can fail when automated systems are used to respond to the email.

I don’t think there will ever be a perfect answer, but I do have one idea to suggest: how about a multi-step verification that has you answer a question?

  1. User “RealGuy” signs up for an account.
  2. The system sends “RealGuy” an email, asking him a simple question (such as “Are you really human?”)
  3. “RealGuy” then clicks on the link in the email, which takes him to a form where he enters the answer to the question - in a text box.

Is that method foolproof? No. But it certainly seems like it would block almost all automated systems, and it should be simple enough for almost all real humans to figure out. How well will it work? I don’t know. But I’m thinking of trying it out quite soon - nothing tells you how well something actually works like a real-life test. I’ll report back once I get some feedback.