Reflections on Usability Tasks

Before I was accepted into the Outreachy internship, I had to present a “first contribution”: run a small formal usability test and do some basic analysis using the heat map method. For this contribution, my mentor Jim asked me to find ten scenario tasks used in other GNOME usability tests and run my own usability test with a few testers.

Instead of using pre-written scenario tasks, I decided to create my own, and I made a lot of mistakes in them 🙂 So today I want to reflect on my first contribution and write about my experience and the mistakes I made in those scenario tasks.


One of the applications I chose for usability testing was gedit, the default text editor of the GNOME desktop environment, designed as a general-purpose editor.
First, I came up with a list of general goals that gedit users might have:

  1. Autosave the file
  2. Change the font size
  3. Correct spelling mistakes in the text
  4. Find and replace text
  5. See the document statistics

After choosing the tasks to test, I formulated scenario tasks for usability testing:

You work as a freelance copywriter. This time you are writing a book review.
G1. Your laptop battery is broken, and if the laptop suddenly shuts off, you might lose recent changes to your document. So before you start, make sure that all changes you make to the document will be saved automatically every 10 minutes.
G2. Make the text bigger.
G3. Correct all spelling mistakes in the text.
G4. You realize that you misspelled the last name Feynman. Replace Feinman with Feynman in the whole text at once.
G5. You need your review to be less than 200 words. Check whether the length of the document is fine.

Now let’s see which of these scenario tasks were well-crafted.

G1 is a good task, as it resembles a real situation in which a user would want to perform a specific action: turning on autosave. The task is specific enough, and at the same time it is short and provides just as much information as a user needs to complete it.

G4 and G5 are realistic tasks too; I think people often find themselves in similar situations when using text editors. There are no task-solving cues, and the descriptions are at the right level of detail. I asked testers to do something specific, set the context, and used the users’ own language to make the tasks clear.

All of these scenario tasks worked well during the usability test: they were well formulated, actionable, and written clearly enough that the testers knew when they had completed each task.

There were also scenario tasks that I would improve in future usability tests.

G2. Make the text bigger.
This scenario task is very abrupt, and a tester might be confused by it. There is no context and no reason or purpose for performing the task. I wanted to keep the task short, but it’s more important to provide participants with all the information they need to complete it.

G3. Correct all spelling mistakes in the text.
This task doesn’t set a context either. More importantly, it gives the tester an unintended hint by reusing keywords from the gedit interface. Scenario tasks that include terms from the interface bias testers’ behavior and yield less useful results, so it is essential to avoid giving such clues in usability testing.

It takes time and practice to develop skill in writing tasks. But with the help of a bright mentor and some effort, we’re almost there! Soon I’ll start working on a new usability test, and hopefully I’ll write good scenario tasks that help uncover usability issues effectively.

 
