
required reading

If you run usability tests, you need to read Mayhew's Usability Testing: You Get What You Pay For. I'm not sure about many of the conclusions she draws from the fact that different usability companies charge different rates and find different problems (Jared, maybe you have some thoughts on this). Her argument seems to boil down to "people are charging too little, so they must be doing it wrong." I have no doubt a lot of people are doing it wrong; I've seen most of the mistakes she lists myself. But I suspect there is more to the differences in results and cost than simply right versus wrong.

Her assessment of the common mistakes novice usability testers make, and how to correct them, is dead on. It's well worth reading and seriously asking yourself, "Am I guilty of this?" It's rare to find a practical article with directly applicable advice; this is one. It reminds me of one of my favorite books, By People, For People. If you don't have this book, I encourage you to pick it up. It deals with many of the questions Mayhew raises, such as sample size and how to avoid influencing Think Aloud protocols.

Posted at June 19, 2002 10:37 AM


Comments

 

Mayhew's article, which really is good, isn't about the cost of usability testing. But it clearly warns about differences in results (both in interpreted conclusions and in "straight" data collection) that can reveal much about the biases and pitfalls of untrained or novice testers.

There's an implied argument that better testing, with strict protocols and experienced, sensitive testers, will produce "truer" results. In that sense, the wheat will separate from the chaff: because accuracy can command a higher price, you WILL eventually "get what you pay for." But all of this is common sense, or the American way, or at most implied rather than stated.

Christina, do you see any fundamental barriers to solving the problems Mayhew describes? Improving protocols, critical thinking, and heuristics will increase the validity of the data collected, and there are many easily referenced articles on improving survey design, task analysis, and the like.

How can testers best remove their blinders, their biases, their "fatal flaws"? What would be their incentive, and how would testers know they need help in the first place?

I'd like to hear your thoughts. Thanks for pointing to her article!

Melissa

Posted by Melissa Bradley at June 20, 2002 08:01 AM


