Archive for the ‘Programming’ Category

Firefox 3 Password Manager is a little TOO helpful

Thursday, July 10th, 2008

I was trying to fix a bug today, where saved usernames and passwords in Firefox were showing up on other forms, in the wrong fields. Pretty simple, I thought - just change the names of the password fields so they’re different from those on the login page.

It didn’t work.

Apparently, the new Password Manager is designed to defeat exactly this kind of measure - renaming the password fields so the user has to re-enter the password. It looks for a field with the same name first, but if it doesn’t find one, it puts the password into the first type=password field it finds - and then puts the username in the text field just before that.

I really can’t see how this is good. But it’s not a bug. This was an intentional design decision by the Mozilla Foundation:

Firefox stores passwords with this metadata:

domain, usernamefield, passwordfield, username, password

It then uses the usernamefield/passwordfield values as hints to find the appropriate <input> elements within a webpage by matching them to the “name” attribute.

Unfortunately this means that when a website redesigns and changes the un/pw field names, the effect on the end user is that the password is “forgotten”.

As a backup, when usernamefield/passwordfield fail to match, Password Manager should attempt to discover the password field manually, using a technique similar to what Camino uses.
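The fallback described above is easy to model. Here’s a rough sketch in JavaScript of how I understand the behavior (this is my approximation, not Mozilla’s actual code, and the field names are made up):

```javascript
// Approximate Firefox 3's fill heuristic: prefer a password field whose
// name matches the saved passwordfield; otherwise fall back to the FIRST
// type=password field on the page, and put the username in the closest
// preceding type=text field.
function findFillTargets(fields, savedPasswordName) {
  var pwIndex = -1;
  for (var i = 0; i < fields.length; i++) {
    if (fields[i].type === "password" && fields[i].name === savedPasswordName) {
      pwIndex = i;
      break;
    }
  }
  if (pwIndex === -1) {
    // No name match: grab the first password field, whatever it's called.
    for (var j = 0; j < fields.length; j++) {
      if (fields[j].type === "password") { pwIndex = j; break; }
    }
  }
  if (pwIndex === -1) return null; // no password field at all
  var userIndex = -1;
  for (var k = pwIndex - 1; k >= 0; k--) {
    if (fields[k].type === "text") { userIndex = k; break; }
  }
  return { username: userIndex, password: pwIndex };
}
```

Run that against a form whose password field has been renamed, and you can see why the saved password lands in the “wrong” field: the fallback fires even when the page author deliberately changed the names.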

While I understand trying to make things easier for your users, sometimes you can go too far. This, I think, is an example of that: it actually causes usability problems. See an example of a problem this can cause here. While this is a contrived example, it should be easy to see how a complex site could face this sort of problem.

Personally, I think Firefox needs to rethink this. It is not a good thing.

Browser Wars: Why IE still wins

Tuesday, June 17th, 2008

Today marks the release of the long-awaited Firefox 3. By all reports, it’s a good browser with great features - and when the site is back up and running, I’ll probably download it myself.

But despite the increased adoption of alternate browsers such as Firefox, Opera, Safari, etc., Internet Explorer remains the dominant browser, and will continue to be the most-used browser for the foreseeable future.

Many will disagree - however, let me explain why I believe this:

The problem is not on the user end. There are many thousands of alternate browser users who are enthusiastically spreading the word. Personally, I recommend Firefox to just about everyone - and most have tried it, and prefer it to IE. No - there is definitely no problem getting users to switch away from IE.

The problem is on the developer side.

I recently wrote about some problems I had with a couple of websites. In one of those cases, I lost some good tickets at the Orange County Performing Arts Center because they don’t properly support non-IE browsers. The bigger problem is that this is not an isolated occurrence. Far too many websites fail to account for alternate browsers.

This is something I simply don’t understand. But it appears that many developers out there are either unaware of the existence of browsers other than IE, or just don’t care to learn how to make their sites work on them.

But the problem doesn’t stop with the developers. The real problem lies with the management of the companies who hire these developers. They hire people who work only with IE, because they are not aware, or don’t care, that there are other browsers.

Now, I can’t imagine why any company would knowingly neglect over 20% of its potential customer base, so the only conclusion I can reach is that this failure is due to ignorance. The people running these companies either don’t know about alternate browsers, don’t know the number of people using alternate browsers, or don’t know that different browsers require different coding.

To this end, I am launching the Alternate Browser Education Initiative (at http://browsereducation.org), a non-profit group aimed at educating businesses about alternate browsers and how to correctly build websites that function on most, if not all, of the available browsers.

I firmly believe that this is the only path to real browser choice. I hope you will join me in my effort.

Quality isn’t important?

Tuesday, May 27th, 2008

So often, it seems that companies with a web presence either have no clue or just don’t care about how they’re seen on the other end.

One of the latest prominent examples of this: the PayPal bug that prevents users from making payments to merchants in different countries. Unlike the typical website bug, this one is getting a lot of attention (also here and here), and is apparently costing people a lot of money.

But the fact is, this is rather typical of many websites. For some reason, many companies are either not aware of or just don’t care about obvious, easy-to-fix bugs on their websites. Proof of this is easy to see if you are a Firefox user: install the Firebug plug-in, turn it on, and just watch the errors pile up. As I write this, I see 18 errors showing - some from sites that I’ve linked to here. (Hey, CenterNetworks, did you know that your JavaScript function srExecute is not defined?)

What’s really frustrating, though, is the total lack of response or concern from these companies when you tell them about a problem.

Last week, I needed to rent a truck to deliver some servers to our new data center. When you need a truck for hauling, who else do you think of but U-Haul? So I went to the U-Haul Reservations page and tried to reserve a truck. When I clicked on the “continue” button, I got no response, until I eventually got a server time-out error. No big deal - I’ll just give it another try. After all, sometimes servers go down.

Well, I tried multiple times over three days, using different browsers (because I know that many companies don’t seem to realize that there are other browsers besides Internet Explorer). But even with all those tries, I was still getting a server time-out.

I contacted the U-Haul web team to let them know about the problem. Their response was:

Your web browser must be set to allow cookies to complete a reservation online. Please enable cookies and try our website again…go to http://reservations.uhaul.com/ and click on either One-Way or In-Town to proceed with your quote. Once you have received your quote and are ready to make a reservation, you will be prompted to enter your contact information.

Let me get this straight: my not having cookies set is the reason their server isn’t responding? Well, I sent them specific details explaining how to reproduce the bug that I was getting. So far, I have heard nothing back, and the problem still exists.

But there are times when bugs on websites cause more than just a little frustration. I recently purchased tickets to take my family to see Phantom of the Opera at the Orange County Performing Arts Center. I went to their online reservations page, found some good seats, and entered my information. But when I submitted the form, it failed to process - apparently because the billing address in my OCPAC account was different from my current billing address.

Now, I had entered the new address in the previous form. So I went back and tried it again, and this time I noticed that Firebug was showing an error on every keypress! It turns out that the form was using window.event - which is proprietary to Internet Explorer. The data was not getting through when I submitted the form, and it looked to be related to this.
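For what it’s worth, the standard workaround is trivial - the handler just has to fall back to IE’s global explicitly instead of assuming it. A quick sketch (the handler name is mine):

```javascript
// Cross-browser keypress handler. Standards-based browsers pass the event
// object as the first argument; IE exposes it only as the global window.event.
function onKeyPress(e) {
  e = e || window.event;            // fall back to IE's global event object
  var key = e.which || e.keyCode;   // W3C property vs. IE property
  return key;
}
```

One line of defensive code, and the form would have worked in every major browser.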

So I tried to get the tickets using IE, but was unable to find any decent tickets this time. At this point, I was getting rather frustrated, and called the ticket center. It turns out that the tickets that had been on hold while I was trying to buy them had since been sold to someone else, and now there were no good orchestra level seats remaining. I ended up having to get balcony seats way off to the side, because their web programmer didn’t know how to make a cross-browser compliant website.

A complaint made through their website, as well as one made by phone, has gone unanswered for over a month now.

I simply don’t comprehend why companies are so resistant to dealing with problems like this - problems that should be trivial for any competent developer to address. Is it that the people in management are unaware of the problems, or is it that they just don’t care? Or is it because it’s so difficult to find competent programmers?

Ultimately, what needs to happen is that the managers of these companies need to understand that people use a variety of browsers, and that they are hiring developers who either don’t know how to develop for multiple browsers, or who don’t care. And these managers need to understand that this lack of support can cost them, especially when the alternative browsers are now reaching as much as 20% of the market.

But, sadly, I don’t see this ever happening. Not when even supposedly web-savvy companies like Google use javascript:void(0) on their personalized home page.

Library Update

Saturday, April 12th, 2008

Yesterday, I needed the ability to make a POST request via AJAX. My Simple Ajax Library was lacking in this ability. So I added it to the code, and, of course, have now published it here.
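For the curious, the core of an AJAX POST in plain JavaScript looks roughly like this (a sketch only - the function names here are illustrative, not necessarily the library’s actual API):

```javascript
// Encode a key/value object as application/x-www-form-urlencoded.
function encodeForm(params) {
  var pairs = [];
  for (var key in params) {
    if (params.hasOwnProperty(key)) {
      pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    }
  }
  return pairs.join("&");
}

// Fire off a POST request; calls back with the response text on success.
function ajaxPost(url, params, callback) {
  var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                  : new ActiveXObject("Microsoft.XMLHTTP"); // old IE
  xhr.open("POST", url, true);
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) callback(xhr.responseText);
  };
  xhr.send(encodeForm(params));
}
```

The main differences from a GET are the Content-Type header and passing the encoded data to send() instead of tacking it onto the URL.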

Site Announcement: HTML Tutorial

Tuesday, March 25th, 2008

My step-daughter wants to learn how to make her own web pages. I figured I would teach her. But suddenly, it wasn’t just her - half the family wants to learn! And they’re spread out all over the country!

Well, there’s only one way to deal with that sort of situation: Write a tutorial.

So here I go: Starting today, I am introducing my basic HTML tutorial. I know there are plenty of others out there, but I’m hoping mine proves useful, at least to some people. I’m trying to keep it as simple and straightforward as possible. As always, your feedback is welcome.

Keeping users out

Monday, March 24th, 2008

In the never-ending battle to combat spam, we have, over the past few years, seen the advent of the “CAPTCHA” - a graphic representation of letters and/or numbers that is, supposedly, readable by humans but not by computers. The idea is to filter out automated systems that try to sign up for accounts or send email or make posts or whatever, and to allow only real humans through.

It’s not working.

Last November, Jeff Atwood (Coding Horror) wrote about some well-known CAPTCHAs, noting that some were, apparently, unbreakable. But now we find that they have all been broken.

By computers.

I don’t know about you, but for me, the Hotmail and Yahoo ones are really tough to read. So much so, that when I first tried to sign up for a Yahoo account, I finally gave up and went elsewhere, because I couldn’t get a CAPTCHA I could actually read.

So, are CAPTCHAs a good idea or a bad one?

This is part of a bigger question - one that every online service has to deal with: The trade-off between security and usability. There is no such thing as a totally secure website; only greater or lesser degrees of security are possible. Any computer connected to the internet is potentially vulnerable. Any server that permits user input is even more vulnerable. And the easier it is for users to enter data, the easier it is for automated systems to enter data. The easiest system to use is the easiest system to abuse.

So what is the answer?

A number of options have been suggested: picking out one picture from a group, answering what a certain picture is of, answering simple math problems spelled out in words. But these methods can still fall to a brute-force approach. Multi-step user verification is becoming more popular as well - where you are asked to respond to an email in order to gain access. But even these can fail when automated systems are used to respond to the email.

I don’t think there will ever be a perfect answer, but I do have one idea to suggest: how about a multi-step verification that has you answer a question?

  1. User “RealGuy” signs up for an account.
  2. The system sends “RealGuy” an email, asking him a simple question (such as “Are you really human?”)
  3. “RealGuy” then clicks on the link in the email, which takes him to a form where he enters the answer to the question - in a text box.
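On the server side, this scheme only needs a single-use token tied to the expected answer. A rough sketch (the store and function names are mine, purely illustrative):

```javascript
// Hypothetical in-memory store of pending signups: token -> expected answer.
var pending = {};

// Step 2: record the question/answer for this signup and return the question
// (in real life, this is where the email with the link would be sent).
function startVerification(user, token) {
  pending[token] = { user: user, question: "Are you really human?", answer: "yes" };
  return pending[token].question;
}

// Step 3: check the answer the user typed into the form.
function checkAnswer(token, answer) {
  var entry = pending[token];
  if (!entry) return false;                 // unknown or already-used token
  var ok = answer.trim().toLowerCase() === entry.answer;
  if (ok) delete pending[token];            // tokens are single-use
  return ok;
}
```

The point of the free-text box is that a bot has to actually understand the question, not just follow a link - which is a much higher bar than clicking a confirmation URL.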

Is that method foolproof? No. But it certainly seems like it would block almost all automated systems, and it should be simple enough for almost all real humans to figure out. How well will it work? I don’t know. But I’m thinking of trying it out quite soon - nothing tells you how well something actually works like a real-life test. I’ll report back once I get some feedback.

New Application Announcement

Wednesday, February 20th, 2008

Version 0.1 (pre-release) of my “To do list” is now available at http://www.patternsofchaos.net/todo/. There are a number of issues still to resolve - especially with regard to date handling - but it’s good enough to put out and start testing.

I want to hear your thoughts, so please use the comments form on this page to tell me what you think.


Monday, February 11th, 2008

I have long been an advocate of usability on the web: basing design on what is easiest for the user. Far too often, we worry too much about what looks good and too little about the ease of use. We forget that without users, nothing else really matters.

I recently found a great article on usability. As I was reading it, I kept thinking “Am I doing that?” I think I will be referring back to this article for some time, as I work out the layout for this site and other projects.

Simple AJAX library

Saturday, February 9th, 2008

So here is my first offering: A really basic, easy to use JavaScript AJAX library. Nothing fancy here - just a quick & simple way to get Ajax working on your site.
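To give a flavor of what “basic” means, a minimal GET wrapper in the same spirit might look like this (a sketch, not the library’s actual code; the optional factory argument is something I’ve added so the helper can be exercised without a browser):

```javascript
// Minimal AJAX GET: calls back with the response text on success.
// xhrFactory is optional; it lets tests (or an old-IE fallback) supply
// their own XMLHttpRequest-like object.
function ajaxGet(url, callback, xhrFactory) {
  var xhr = xhrFactory ? xhrFactory()
          : (window.XMLHttpRequest ? new XMLHttpRequest()
                                   : new ActiveXObject("Microsoft.XMLHTTP"));
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) callback(xhr.responseText);
  };
  xhr.open("GET", url, true);
  xhr.send(null);
}
```

Usage is a one-liner: ajaxGet("/data.txt", function (text) { alert(text); });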