A process that employs people as testing participants who are representative of the target audience to evaluate the degree to which a product meets specific usability criteria.

This is the definition offered by J. Rubin and D. Chisnell. We all know pretty much how it goes: you get a few people who fit the profile of your target audience, you ask them to do some tasks with your software – or with a simulation of that software, i.e. a prototype – and then you watch as they attempt to complete the tasks.

We record usability testing sessions. Recordings are a very useful memory aid that complements our notes when needed. However, the most important reason we record tests is that video works as a communication tool. Even the teams most reluctant to believe that anybody could have difficulty using their beloved software will cave when watching real people having real trouble. Video constitutes unequivocal evidence that usability issues exist.

Useful as this is, I believe the real power of video comes from its ability to generate empathy. For design and development teams, the “end user” is an abstract entity. By watching usability testing videos, that abstract entity gains a face: it becomes real people, trying to do real things with software, and having real problems when doing so.

The team making the software will remember these people for months. They become a point of reference when making design and development decisions. Team members start saying things like: “See that button? Remember the girl who couldn’t find it? Do you think SHE would find it now? Is it big enough FOR HER? Would it look clickable TO HER? Is it in a place where SHE would expect it to be?” Through this “concretisation”, users are brought into the centre of the software-making process. This is something we normally attempt to achieve using personas, although I must confess that where my personas have failed miserably, I have succeeded using videos from usability testing sessions.