Project Recap

With the USeD project drawing to a close, it is a good time to recap what we set out to achieve and how we went about it.

Overview

The USeD project aimed to improve the usability and learnability of the interface through which users download spatial data from the Digimap service. The current data downloader is a popular service but is perhaps not the most user-friendly. It was designed around the technical constraints of the software of the time (2002) and the requirement that it had to integrate with an inflexible server-side database.

Its replacement would be designed around the needs of the user but would still have to integrate with a server-side database. However, the new database is more flexible and data extraction is far simpler.

The new interface must serve everyone from experienced users who know exactly what they want to complete novices who are perhaps less confident working with spatial data. The interface should therefore be intuitive and learnable, allowing users to explore some of the advanced functionality as they gain confidence. You can read a detailed summary on the About USeD page.

Personas

The first task was to interview some users and create a set of user personas. 20 users were interviewed and this resulted in 5 distinct personas. The personas shaped the user requirements and steered the design of the interface throughout the project. You can meet our personas on the persona page.

Design Specification

The design specification can be divided into two parts: user requirements and a technical specification. The user requirements were derived from the personas. For each persona we had created a list of “person X wants to” and “we would like person X to” statements, which made it quite a simple task to put together an initial list of requirements. We grouped the requirements into:

  1. a user must be able to
  2. a user should be able to
  3. a user could be able to

Grouping the requirements like this gave the engineers an idea of the importance of each requirement, which made it easier to justify spending more time implementing small functions that were deemed to be a must. The user requirements documentation can be found here.
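As an illustration of how this grouping can feed straight into development, here is a minimal sketch of the idea in Python. It is purely hypothetical: the persona names, requirement texts and field names are invented for the example and are not taken from the USeD requirements document.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        text: str      # completes "a user <priority> be able to ..."
        persona: str   # the persona the requirement came from
        priority: str  # "must" | "should" | "could"

    # Invented examples for illustration only.
    requirements = [
        Requirement("select an area of interest on the map", "novice", "must"),
        Requirement("add data products to a basket", "expert", "must"),
        Requirement("preview a selection before ordering", "novice", "should"),
        Requirement("re-order a previous download in one click", "expert", "could"),
    ]

    # Group by priority so the engineers can see at a glance what is a "must".
    by_priority = {}
    for r in requirements:
        by_priority.setdefault(r.priority, []).append(r)

    for priority in ("must", "should", "could"):
        print(f"A user {priority} be able to:")
        for r in by_priority.get(priority, []):
            print(f"  - {r.text} (from persona: {r.persona})")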

The technical review focused on the software and libraries that could be used to make the new interface more attractive and interactive. The server-side database had already been updated, so any new technology had to integrate with it.

Prototype or Full Build?

This was a key question in the project. Do we use wire-frame mockups to show different designs, or do we use a fully functioning test site? We went with the full build as we suspected that there would be issues surrounding the strong visual element of the map window and users’ expectations of what they would receive in their order. It was felt that a wire-frame would not address these issues. Building fully functioning test sites involved far more developer time and effort, but it was certainly worth it.

Iterative User Testing

We used task-based testing to explore the usability of the new interface. We started with an expert review from our usability consultant, which caught a number of issues that we had missed. The task-based testing engaged real users. Each user had 45 minutes to complete a number of tasks and we tried to have 6 people per session. The interface was then modified between sessions. We ran 3 sessions and saw our “problem points” migrate from the initial screen through the ordering process. This was encouraging, as it suggested that users were able to progress further in each successive session before they ran into problems. The user testing is described in detail in a number of posts.

Project hand-over


Handover – Tableatny @ Flickr

At the end of the project we will hand over our findings to the Digimap Service team. The hand-over will be a document that outlines a number of improvements that can be made to the existing interface. Each recommendation will be ranked as High, Medium or Low. Each recommendation will address an identified issue in the current interface and will suggest a solution which has been implemented and tested during the USeD project. Where multiple solutions were trialled, a brief summary of these will be given to justify the final suggestion.
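Purely as a sketch of the shape each entry might take (the issue, solution and field names below are invented for illustration, not taken from the actual handover document):

    # Hypothetical shape of one handover recommendation entry; the issue
    # and solution texts are invented for illustration.
    recommendation = {
        "rank": "High",  # High | Medium | Low
        "issue": "Users fail to notice the 'add to basket' button",
        "solution": "Increase the button's size and visual prominence",
        "evidence": "Implemented and tested during the project iterations",
        "alternatives_trialled": ["Move the button above the product list"],
    }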

This style of documentation proved to be a very effective way of suggesting improvements to the development team.


Usability lab on a shoestring budget

Usability testing should be an important part of the development of any user interface. Ensuring that the interface is intuitive and easy to use is critical to its success. However, running usability sessions with real users often strikes fear into project teams, who assume that it will be a costly and time-consuming process that will confuse as much as it clarifies the design process. This article aims to demonstrate how easy it is to set up an effective usability lab on a shoestring budget.

Background

The USeD project aims to improve the interface of a data download website which provides spatial data to the education sector in the UK. User testing is an integral part of the USeD project and carrying out iterative assessment exercises will drive the development of the interface. However, the project budget is quite modest and most of it is assigned to designing and coding the interface.

A discussion with our usability expert on the usefulness of various techniques suggested that most issues with an interface can be identified using quite simple techniques such as task-based exercises. Advanced techniques such as eye tracking are better suited to investigating very specific problems, so it made sense to identify the general issues first.

User Task Based Testing

Task-based testing centres around setting users a series of small, distinct tasks that have been designed to test the functionality of an interface. The initial tasks should be quite straightforward, but later ones can be more involved, allowing sessions to explore more advanced aspects of the interface. Tasks should give the user a clear understanding of what they want to achieve but should allow them the flexibility to explore the interface. This flexibility can reveal how users discover functionality in the interface. In these testing sessions we have 6 tasks and each session will last up to 45 minutes. Any longer than this and it is probable that the user will tire and lose focus.
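For illustration only, a task plan can be captured as simple structured data, ordered from easy to hard with a cap on session length. The task wording below is invented, not the actual USeD test script.

    # Hypothetical task plan for one testing session.
    SESSION_LIMIT_MINUTES = 45  # longer and users tire and lose focus

    tasks = [
        # (goal shown to the user, rough difficulty)
        ("Find the data download page from the service home page", "easy"),
        ("Navigate the map to a town you know well", "easy"),
        ("Select an area of interest around that town", "medium"),
        ("Add one data product for that area to your basket", "medium"),
        ("Change the selected area and update your basket", "hard"),
        ("Submit the order and find the download link", "hard"),
    ]

    for number, (goal, difficulty) in enumerate(tasks, start=1):
        print(f"Task {number} ({difficulty}): {goal}")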

So, how can you set up an effective user testing lab in your own office using pretty much “stuff” that you find lying around or “borrow” temporarily? The recipe below describes how we went about the task.

Ingredients:

  • 2 rooms, close together or preferably next to each other
  • 2 computers
  • 3 screens
  • 1 web cam
  • 1 mic
  • 1 set of baby monitors
  • A sprinkle of free software
  • 1 really helpful systems support person

First of all, having two rooms is a huge benefit as it means that only the candidate and the facilitator (the person running the test) need to be in the test room. This reduces the stress on the user so that the session feels less like an exam. A nervous or flustered user will not interact with the interface naturally, which may affect the results of the tasks. Having the rooms next to each other makes things much easier as you can run cables between them.

Test lab

Test Room

  • Set up a computer that is typical of the ones you expect users to access the interface through in normal use. If users are likely to use a laptop or a 15-inch monitor, it would be unfair to run the test on a 21-inch monitor.
  • Set up a web cam that shows the user and the facilitator. This should be set up in an unobtrusive way and is there to monitor general body language rather than detailed facial expressions or eye movements.
  • Position the transmitting part of the baby monitor so that it will pick up the conversation.
  • Place a dictaphone or other recording device to capture the conversation between the candidate and the facilitator. This is really just a backup in case parts of the conversation get missed.
  • Make sure you provide some water for the candidates and a bit of chocolate never hurts.

Observation room

The observation lab can be set up in various ways but if you have access to two monitors then this makes things easier.

  • Set up the computer with a “Y” splitter to two monitors. Monitor 1 will show the user’s screen and monitor 2 will display the webcam feed. Set the monitors up about 1.5m away from the observers. This will give them room to make notes, and setting them back a bit means that they can easily scan both monitors at the same time without the “watching tennis” effect.
  • The receiving part of the baby monitor will provide the live audio from the other room.
  • Remember some water and chocolate or sugary sweets to keep the observers alert.


Observation room


Porting the display

To display the user’s screen, we used some free software called “ZoneScreen”. This has to be installed on both computers. Once installed, start ZoneScreen on the machine in the user lab and set it as the HOST. Make a note of the IP address. On the computer in the observation room, start ZoneScreen, set the session to REMOTE and enter the IP address of the computer in the other room. You should now be able to see everything that happens on the user computer.
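For the curious, what screen-porting software like this does is conceptually simple: capture the screen, compress each frame and stream it over the network. The sketch below is a minimal, purely illustrative DIY equivalent in Python; it assumes the third-party mss (screen capture) and Pillow (image encoding) packages and is not how ZoneScreen itself is implemented.

    # Illustrative sketch of streaming a screen over the network.
    # Requires the third-party packages mss and Pillow.
    import io
    import socket
    import struct
    import time

    import mss
    from PIL import Image

    HOST, PORT = "0.0.0.0", 5000  # hypothetical address and port

    def serve_screen():
        with socket.create_server((HOST, PORT)) as server:
            conn, _ = server.accept()  # wait for the observation-room machine
            with mss.mss() as sct:
                monitor = sct.monitors[1]  # primary monitor
                while True:
                    shot = sct.grab(monitor)
                    img = Image.frombytes("RGB", shot.size, shot.rgb)
                    buf = io.BytesIO()
                    img.save(buf, format="JPEG", quality=60)
                    data = buf.getvalue()
                    # Length-prefix each JPEG frame so the receiver can
                    # split the stream back into images.
                    conn.sendall(struct.pack(">I", len(data)) + data)
                    time.sleep(0.2)  # ~5 frames/second is plenty for observing

    if __name__ == "__main__":
        serve_screen()

The machine in the observation room would then connect to this port, read each length-prefixed frame and display it, which is essentially the HOST/REMOTE split described above.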

Webcam

The webcam feed is a little bit trickier. We experimented with broadcasting this across our network, but there was often a lag of up to 20-30 seconds, which made it very difficult to follow what was actually going on. As we had the luxury of two rooms next to each other, we were able to connect the webcam directly to the computer in the observation lab. To do this you need a powered USB extension cable. The 10m extension we used occasionally failed, possibly because the signal degraded along its length. Replacing it with a 5m cable solved the problem.

Results

This set-up worked really well. The observers were able to see the candidate’s screen and hear everything that was said. The webcam was useful for giving everything context: you could tell when the candidate had turned to speak to the facilitator and you could monitor their general body language. There was only the slightest delay on the screen display feed, but this did not cause a problem. The baby monitors might seem very low tech but they are reliable and effective.

So, what did all this cost? All the software was free and we scavenged everything except the 5m powered USB cable and the baby monitors. The total cost of this equipment was £40. A huge thanks to Nik, EDINA’s small systems support officer, who managed to find the software and put the lab together.

Results of UI testing on Version 2

So, you think you have a good, usable product which clearly sets out what the user has to do to get what they want… and then you do some user testing. The UI testing on Version 2 of the downloader was extremely useful; it pointed out many things that we had missed and that now seem just so obvious. This post will outline the main points that emerged from the testing and will describe how we ran the tests themselves. But before we start, it is important to remember that the testing revealed many positive things about the interface, and users thought it was an improvement over the current system. The rest of this post concentrates on the negatives, but we shouldn’t be too depressed.

Setup

We decided to run this UI testing in a different configuration from the one we intend to use for the tests with external students. We wanted our usability expert to be able to guide us through the test so that we would conduct it using best practice. Viv was to be the “facilitator” and Addy the “observer”. David was observing everything and would provide feedback between tests.

We had 5 candidates who would each run through 5 tasks during a 40-50 minute period. We left 30 minutes between each test to allow us time to get feedback from David and to discuss the tests. As it turned out, the day was quite draining (five sessions of up to 50 minutes plus the breaks between them fill over six hours) and I wouldn’t recommend trying to do more than 6 candidates in a day. Your brain will be mush by the end of it and you might not get the most out of the final sessions.

Results

The tests went well and we improved as the day went on thanks to feedback from the usability expert, David Hamill. It was certainly useful to have David facilitate a session so that we could observe him in action.

The participants all said that they thought the interface was easy to use and quite straightforward. However, it was clear that most users struggled with the process of:

  1. selecting an area of interest
  2. selecting data products
  3. adding these products to the basket
  4. submitting the order

As the primary role of the interface is to allow users to order data, this seems to be an area that will need significant investigation before the next iteration. Other issues that arose during the sessions include:

  • The “Search and Select An Area” step still seemed to confuse users. Some struggled to see that they had to actually select an area in addition to just navigating to it using the map.
  • The basket button looks busy and is not prominent enough.
  • Download limits were not obvious to the user.
  • Users often couldn’t recover from minor mistakes and “looked for a reset button” (technically you don’t need a reset button, but the users didn’t know this, so this needs to be addressed).
  • The preview area in the basket was not all that useful: the popup covered the map which showed the selection. In addition to previewing the geographical extent selected, it should also preview the data product selected.
  • Make the info buttons easier to browse through.
  • Add more information to the “Use Tile Name” section, and perhaps investigate how we can integrate this with the view grid function on the right of the map window.
  • Add a clear all button to the basket area.

A detailed report of the main issues that emerged during the user testing can be found in the Version 2 Testing Report (pdf).

The testing session was a success on two levels: Viv and I learnt a great deal about conducting UI tests by having the usability expert present, and we identified some key areas of the interface that were causing users problems. Most of these are glaringly obvious once they have been pointed out to you, but then that is the point of UI testing, I suppose!

Downloader Version 2

Working from the recommendations of the first round of user testing (the expert review by our usability consultant), we have developed Version 2 of the Data Downloader.

Data Downloader Version 2

The main improvements that have been implemented include:

  • isolate the search and pan/zoom function
  • add select visible area button
  • tidy up the products list
  • increase the prominence of the “add to basket” button
  • add “on hover” functionality to the rubbish bin and the info icons, making them change colour so users notice them.

The next step is to carry out some UI testing on this interface. Organising external staff and students is time-consuming, so it has been decided that for the first round of UI testing we will use EDINA staff. We should be able to approximate the personas from staff as we have people with a mix of geo knowledge. The other advantage of this is that we will be able to use this round of testing as a dry run and get the usability expert to observe the tests. We will then get feedback and tips on running a UI test and hopefully improve our technique so that we get more out of the tests with external users.