Version 3 User Testing


This round of testing concentrates on Version 3 of the New Data Downloader User Interface. The two previous interfaces have undergone an “expert review” and testing with EDINA staff.  Many issues have been identified and solutions have been implemented.  This version of the interface will be tested with actual users.

Finding Users

Finding actual users who could test the interface meant returning to the Digimap user log.  We identified staff and students who had used the current downloader and who were affiliated with an institution in the Edinburgh/Glasgow area. Candidates were divided into three categories:

  1. those who had used the current downloader 5 times or more
  2. those who had used the current downloader fewer than 5 times
  3. those who had used other Digimap services but had not used the current downloader.

We stuck to roughly the same format as the previous user testing session: a series of 5 set tasks that would explore much of the interface and site functionality. Each candidate would have a maximum of 45 minutes to work through the tasks, leaving 15 minutes for discussion between the facilitator and the observer. We intended to have 6 candidates starting at 10am, giving adequate time, or so we thought, to check the system was working on the day of the test.

We tweaked the tasks slightly, making changes to the way we presented information to the candidates.  This was in response to feedback from the first round of testing with internal EDINA staff.  It is amazing what candidates will extract from a handout that you have not even noticed, and sometimes small pieces of information can bias or mislead a test. This highlights how important a dry run is before organising sessions with external users.

Lessons

So what did we learn from this session?  Well, the lessons can be separated into things that would improve how we run tests and things that would improve the user interface.

About the test:

  1. Set up the lab and test that everything works on the day of the test. Do not assume that just because it worked yesterday it will work today.
  2. Run through the actual tasks during your test as if you were a candidate. (I tested the new interface on my computer and it was fine, but the first candidate struggled and “things” just didn’t seem right.  A bit of panicking later, we discovered that the UI didn’t run quite as intended in Firefox 8. The 15 minutes between candidates gave me time to download Chrome and test that everything was working.)
  3. Try not to run a session on the same day as a fire alarm test. (Yup, 5 minutes into the first candidate’s session the fire alarm went off and stayed on for over a minute.  This was a scheduled test and I had completely forgotten about it.  Live and learn.)
  4. Keep calm and carry on – even when everything seems to be going wrong you can still get something out of a session.  If you discover a bug, just get the candidate to move on.  If the interface becomes unusable, or the candidate gets flustered and disengages, move on to discussing the interface and the process. Ask some questions that don’t require them to use the interface, such as “how would you like to interact to get data?” or “what similar interfaces have you used?” (in this case it might be Google Maps or Bing Maps). This may allow you to ease them back into the tests.
  5. Don’t worry if the candidate seems shy and isn’t saying much. Remember to ask them to explain what they are doing and why, and they will most probably relax into the situation. A slow, quiet user who takes time to think can provide insightful feedback; you just have to coax them into thinking out loud.

About the User Interface:

  1. Some users found it difficult to see which button to press on the Basket pop-up; they were not sure if the appearance of this window indicated that their order had been placed, or if they still had to “do” something to place the order. (Closer examination of this issue reveals that some of the confusion may be related to two buttons that were added to the My Basket window between Version 2 and Version 3. They are the same size and colour as the Place Order button and may dilute its importance.)
  2. The “Add to Basket” button was still not prominent enough; users often did not spot it. (We had already tweaked this: in this version the button was initially grey, flashed red when items were selected from the product list, and then turned blue like the other function buttons. A sketch of this state progression appears after this list.)
  3. All pop-up windows must close when an action button is pressed.  Users were often left thinking they still had something to do in the pop-up.
  4. The toggle between pan and draw rectangle is still not absolutely clear.  Moving the search function out of the select area has helped, but more thought is needed on how to make this toggle clearer to the user.
  5. The My Account section is confusing to users.  They were not sure why two grids were displayed.  We need to think about how to make this section clearer when it appears, while retaining the ability to re-order an order, or part of an order.
  6. Selecting data through the Bounding Box was not clear to all users. Some struggled to interpret the Upper Right X/Y and Lower Left X/Y diagram. (It is not clear whether users struggled because they were not initially sure what a bounding box was, or what X/Y were. However, we hope that the interface will be learnable, so that novice users can work out how to select data using features like the bounding box from the information presented in the UI. The language and terms used in the UI are industry-standard terms that are useful to know if you work with spatial data. A minimal example of the bounding-box coordinates appears after this list.)
  7. Add a text input box to sit alongside the search button.  A couple of users didn’t initially use the search function and commented that they hadn’t spotted it; they were instinctively looking for a text input box where they could enter search terms.
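To make the button behaviour described in point 2 concrete, here is a minimal sketch of the grey / red flash / blue progression. It is illustrative only: the language (TypeScript), the CSS class names, the state names and the timing are assumptions, not the downloader’s actual implementation.

```typescript
// Hypothetical sketch of the “Add to Basket” state progression; class names,
// state names and timing are assumptions, not the real downloader code.

type BasketButtonState = "idle" | "attention" | "ready";

// Grey while nothing is selected, a brief red flash when products are ticked,
// then blue like the other function buttons.
const stateClass: Record<BasketButtonState, string> = {
  idle: "btn-grey",
  attention: "btn-flash-red",
  ready: "btn-blue",
};

function setAddToBasketState(button: HTMLElement, state: BasketButtonState): void {
  button.classList.remove(...Object.values(stateClass));
  button.classList.add(stateClass[state]);
}

// Called when the user selects an item from the product list.
function onProductSelected(button: HTMLElement): void {
  setAddToBasketState(button, "attention"); // flash red to draw the eye
  window.setTimeout(() => setAddToBasketState(button, "ready"), 600); // settle on blue
}
```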
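For point 6, the relationship between the Lower Left and Upper Right coordinates is easier to see written out. The sketch below is hedged: the field names, the validation rule and the example British National Grid values are for illustration only, not the downloader’s actual data model.

```typescript
// Hypothetical bounding-box structure; field names and example values are
// illustrative, not taken from the downloader itself.

interface BoundingBox {
  lowerLeftX: number;  // minimum easting
  lowerLeftY: number;  // minimum northing
  upperRightX: number; // maximum easting
  upperRightY: number; // maximum northing
}

// The upper-right corner must sit above and to the right of the lower-left
// corner; this is the relationship the X/Y diagram is trying to convey.
function isValidBoundingBox(box: BoundingBox): boolean {
  return box.upperRightX > box.lowerLeftX && box.upperRightY > box.lowerLeftY;
}

// Example: a box roughly covering central Edinburgh (approximate British
// National Grid eastings and northings).
const edinburgh: BoundingBox = {
  lowerLeftX: 324000,
  lowerLeftY: 672000,
  upperRightX: 328000,
  upperRightY: 676000,
};

console.log(isValidBoundingBox(edinburgh)); // true
```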

This is just a summary of the main points that we extracted from the session. You will find the complete list in the Version 3 User Testing Report (LINK).

Summing Up

Overall, the testing was a success and we have a number of development tasks to focus on.  Previous testing had identified issues with the process of selecting an area, selecting data and adding it to the basket. These issues seem to have been largely resolved, and the main problems have migrated to the Basket and My Account sections.  This is encouraging and suggests that the initial steps are now more intuitive.

However, some of the changes we implemented after Version 2 seem to have created as many issues as they have solved.  This is particularly clear in the case of the Basket.  Adding two extra buttons (clear basket and add more data) appears to have diluted the importance of the Place Order button, which is unfortunate as it is the most important button on the Basket pop-up.

 
