Lessons Learned

Most of this has been covered in the previous post but it would be good to extract a number of key things that we have learned through the USeD project.

  1. usability can save you time and money during the development of a new application
  2. external consultants can be an effective way of buying in skills if you do not have them “in house”
  3. external consultants can be used to up-skill project staff
  4. however well you think you know your users/sector, engaging with users will always reveal something unexpected
  5. users may be using your service for something other than its primary purpose. This may be because they don’t know there is another service that would be better suited, or because your service is the best thing out there that almost does what they want
  6. personas work, even with contrived names such as Explorer Evie or Work-around Walter. These make it easier to discuss issues and problems with the project team and relate them back to a “real” user.
  7. user testing points out the blindingly obvious which was not obvious until you started testing
  8. you can salvage something from a user test even if it seems to be going badly wrong
  9. you don’t need more than 5-6 users to test an interface; by the 4th person you are uncovering very little in the way of new issues.
  10. write up user tests immediately, important information seeps out of your mind in a short space of time
  11. usability labs need not be expensive
  12. effective documentation makes buy-in from stakeholders much easier.

I think I will leave it there; I may come back to this list and add a couple more items.

Project Recap

With the USeD project drawing to a close it is a good time to recap on what we set out to achieve and how we went about it.

Overview

The USeD project aimed to improve the usability and learnability of a user interface which enabled users to download spatial data from the Digimap service. The current data downloader is a popular service but is perhaps not the most user friendly. It was designed around the technical constraints of the software of the time (2002) and the requirement that it had to integrate with an inflexible server-side database.

Its replacement would be designed around the needs of the user but would still have to integrate with a server-side database. However, the new database is more flexible and data extraction is far simpler.

The new interface must serve all users, from experienced users who know what they want to complete novices who are perhaps less confident working with spatial data. The interface should therefore be intuitive and learnable, allowing users to explore some of the advanced functionality as they gain confidence. You can read a detailed summary on the About USeD page.

Personas

The first task was to interview some users and create a set of user personas. 20 users were interviewed and this resulted in 5 distinct personas. The personas were used to shape the user requirements and to steer the design of the interface throughout the project. You can meet our personas on the persona page.

Design Specification

The design specification can be divided into 2 parts: user requirements and a technical specification. The user requirements were derived from the personas, in which we had created lists of “person X wants to” and “we would like person X to”, which made it quite a simple task to put together an initial list of requirements. We grouped the requirements into:

  1. a user must be able to
  2. a user should be able to
  3. a user could be able to

Grouping the requirements like this gave the engineers an idea of the importance of each requirement, which made it easier to justify spending more time implementing small functions that were deemed to be a must. The user requirements documentation can be found here.

The technical review focused on the software and libraries that could be used to make the new interface more attractive and interactive. The server-side database had already been updated, so any new technology had to integrate with this.

Prototype or Full Build?

This was a key question in the project. Do we use wire-frame mockups to show different designs or do we use a fully functioning test site? We went with the full build as we suspected that there would be issues surrounding the strong visual map window and the expectation of what the user would receive in their order. It was felt that a wire-frame would not address these issues. Building fully functioning test sites involved far more developer time and effort, but it was certainly worth it.

Iterative User Testing

We used task-based testing to explore the usability of the new interface. We started with an expert review from our usability consultant, which caught a number of issues that we had missed. The task-based testing engaged with real users. Each user had 45 mins to complete a number of tasks and we tried to have 6 people per session. The interface was then modified between sessions. We ran 3 sessions and saw our “problem points” migrate from the initial screen through the ordering process. This was encouraging as it suggested that users were able to progress further in each successive session before they ran into problems. The user testing is described in detail in a number of posts.

Project hand-over

Handover – Tableatny @ Flickr

At the end of the project we will hand over our findings to the Digimap Service team. The hand-over will be a document that outlines a number of improvements that can be made to the existing interface. Each recommendation will be ranked as either High, Medium or Low. Each recommendation will address an identified issue in the current interface and will suggest a solution which has been implemented and tested during the USeD project. Where multiple solutions were trialled, a brief summary will be given to justify the final suggestion.

This style of documentation proved to be a very effective way of suggesting improvements to the development team.

 

Version 4 User Testing

The final round of interface testing followed the same format as the previous sessions: 6 candidates ran through a series of tasks designed to test the usability of the interface. Once again, candidates were selected from a list of registered Digimap users. The main findings of this testing session are summarised below:

1.  Text Search

The “No results” message box should include the following text: “No results found for ‘dasas’. Please check the spelling or try an alternative. You can search by place name, postcode or grid ref.”

The button used to close the search box currently says “Select and Close”. Several users found the term Select confusing. Change this to “Close” and fix the tool tip.

2. Draw Rectangle

There were a couple of issues with this. The default function should always be pan; however, it is currently possible to have draw rectangle active while use coordinates/use tile name is selected. You should only have one select function active at any time. A user selected a tile through use tile name and then returned to the map and wanted to pan, but their first click cleared the selection as the draw rectangle button was still active.

A wider issue to think about is whether the absence of a pan button confuses users and prevents them from panning. Or is the current system learnable? We could improve the help and the tool tip to improve the learnability of this toggle: “ON – Draw rectangle to select data. OFF – Pan the map”.

3. Add to basket error

Change the text to say “You have too much 1:10 000 Raster data in your basket, the limit is 200 tiles. Either reduce your selected area or select another product”.

4.  My Account

Further refinements are needed in the My Account section. The Green Envelope and Blue rubbish bin worked well visually. These should be the only clickable elements in each row. Once selected, the bottom grid should populate, and if the order button is pressed this will re-order the complete order. Only if the user checks one of the check boxes will the order be split. So, all the radio buttons should be checked when the bottom grid is populated.

5. Preview

Add in a preview for datasets that are UK-wide. The lack of a preview confused more than one candidate. The tooltip on Preview is also not right.

6. Use Coordinates

The order of information is now confusing. The map example was useful, but the input boxes should sit below this image. The OR options can then sit below the input boxes. We also need an error box on “Get coordinates from selected area” to catch cases where users have no area selected.

7. Use Tile Name

Change the text below the text input box to read “Click the view grid icon on the right of the map to view tile grids at any time.”

Summary

Overall, Version 4 user testing was quite encouraging. No major issues were discovered. The feedback from the users was positive and the issues that were identified were generally quite small.  They focus on things that would make the interface clearer and more learnable.

The plan now is to collate the findings from the usability testing and produce a number of recommendations on how to improve the version of the data downloader that is currently live as a beta.  Recommendations will be supported by the evidence gathered during this user testing program.

 

Usability lab on a shoestring budget

Usability testing should be an important part of the development of any user interface. Ensuring that the interface is intuitive and easy to use is critical to its success. However, running usability sessions with real users often strikes fear into project teams. They assume that it will be a costly and time-consuming process that will confuse as much as it clarifies the design process. This article aims to demonstrate how easy it is to set up an effective usability lab on a shoestring budget.

Background

The USeD project aims to improve the interface of a data download website which provides spatial data to the education sector in the UK. User testing is an integral part of the USeD project and carrying out iterative assessment exercises will drive the development of the interface. However, the project budget is quite modest and most of it is assigned to designing and coding the interface.

A discussion with our usability expert on the usefulness of various techniques suggested that most issues with an interface could be identified using quite simple techniques such as task-based exercises. Eye tracking allows testing to focus on very specific problems and it was better to identify general issues first before considering advanced techniques.

User Task Based Testing

Task-based testing centres around setting users a series of small, distinct tasks that have been designed to test the functionality of an interface. The initial tasks should be quite straightforward but later ones can be more involved, allowing sessions to explore more advanced aspects of the interface. Tasks should give the user a clear understanding of what they want to achieve but should allow them the flexibility to explore the interface. This flexibility can reveal how users discover functionality in the interface. In these testing sessions we have 6 tasks and each session will last up to 45 minutes. Any longer than this and it is probable that the user will tire and lose focus.

So, how can you set up an effective user testing lab in your own office using pretty much “stuff” that you find lying around or “borrow”, temporarily?  The recipe below describes how we went about the task.

Ingredients:

  • 2 rooms, close together or preferably next to each other
  • 2 computers
  • 3 screens
  • 1 web cam
  • 1 mic
  • 1 set of baby monitors
  • A sprinkle of free software
  • 1 really helpful systems support person

First of all, having two rooms is a huge benefit as it means that only the candidate and the facilitator (the person running the test) need to be in the test room. This reduces the stress on the user so that the session feels less like an exam. A nervous or flustered user will not interact with the interface naturally, which may affect the results of the tasks. Having the rooms next to each other makes things much easier as you can run cables between them.


Test Room

  • Set up a computer that is typical of the ones you expect users to access the interface through in normal use. If users are likely to use a laptop or a 15 inch monitor, it would be unfair to run the test on a 21 inch monitor.
  • Set up a web cam that shows the user and the facilitator. This should be set up in an unobtrusive way and is to monitor general body language rather than detailed facial expressions or eye movements.
  • Position the transmitting part of the baby monitor so that it will pick up the conversation.
  • Place a microphone or dictaphone to capture the conversation between the candidate and the facilitator. This is really just a back-up in case parts of the conversation get missed.
  • Make sure you provide some water for the candidates and a bit of chocolate never hurts.

Observation room

The observation lab can be set up in various ways but if you have access to two monitors then this makes things easier.

  • Set up the computer with a “Y” splitter to two monitors. Monitor 1 will show the user’s screen and monitor 2 will display the webcam feed. Set the monitors up about 1.5m away from the observers. This will give them room to make notes, and setting them back a bit means that they can easily scan both monitors at the same time without the “watching tennis” effect.
  • The receiving part of the baby monitor will provide the live audio from the other room.
  • Remember some water and chocolate or sugary sweets to keep the observers alert.

 



Porting the display

To display the user’s screen, we used some free software called “ZoneScreen”. This has to be installed on both computers. Once installed, start ZoneScreen on the machine in the user lab and set it as the HOST. Make a note of the IP address. On the computer in the observation room, start ZoneScreen, set the session to REMOTE and enter the IP address of the computer in the other room. You should now be able to see everything that happens on the user’s computer.
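
If you cannot get hold of ZoneScreen, or simply want something scriptable, the same idea is easy to knock together yourself. The sketch below is purely illustrative and is not the software we used: it assumes Python with the Pillow library on both machines, that the two rooms share a network, and that a crude 2 frames per second is enough for observation. The test-room machine captures the screen and sends length-prefixed JPEG frames over a TCP socket; the observation-room machine displays them in a small window.

    # sender.py -- run on the test-room machine (hypothetical stand-in for ZoneScreen's HOST role)
    import socket
    import struct
    import time
    from io import BytesIO

    from PIL import ImageGrab  # pip install pillow

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 5000))   # the observation machine connects to this port
    server.listen(1)
    conn, _addr = server.accept()

    while True:
        buffer = BytesIO()
        # grab the screen, force RGB (JPEG cannot store alpha) and compress
        ImageGrab.grab().convert("RGB").save(buffer, format="JPEG", quality=50)
        frame = buffer.getvalue()
        conn.sendall(struct.pack(">I", len(frame)) + frame)  # length-prefixed frame
        time.sleep(0.5)                                       # ~2 fps is plenty for observers

    # receiver.py -- run on the observation-room machine (stand-in for ZoneScreen's REMOTE role)
    import socket
    import struct
    import tkinter as tk
    from io import BytesIO

    from PIL import Image, ImageTk

    sock = socket.create_connection(("192.168.0.10", 5000))  # example IP of the test-room machine

    def recv_exact(n):
        """Read exactly n bytes from the socket."""
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("stream closed")
            data += chunk
        return data

    root = tk.Tk()
    root.title("Test-room screen")
    label = tk.Label(root)
    label.pack()

    def show_next_frame():
        (length,) = struct.unpack(">I", recv_exact(4))
        photo = ImageTk.PhotoImage(Image.open(BytesIO(recv_exact(length))))
        label.configure(image=photo)
        label.image = photo        # keep a reference so Tk does not discard the image
        root.after(100, show_next_frame)

    root.after(0, show_next_frame)
    root.mainloop()

It is nowhere near as polished as a dedicated tool, and the blocking socket read will make the viewer window a little sluggish, but it illustrates how little is actually needed to port a display between two nearby rooms.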

Webcam

The webcam feed is a little bit trickier. We experimented with broadcasting this across our network, but there was often a lag of up to 20-30 seconds which made it very difficult to follow what was actually going on. As we had the luxury of two rooms next to each other, we were able to connect the webcam directly to the computer in the observation lab. To do this you need a powered USB extension. The 10m extension we used occasionally failed, possibly because the signal attenuated along its length. Replacing this with a 5m cable solved the problem.

Results

This set-up worked really well. The observers were able to see the candidate’s screen and hear everything that was said. The webcam was useful for giving everything context: you could tell when the candidate had turned to speak to the facilitator and you could monitor their general body language. There was only the slightest delay on the screen display feed, but this did not cause a problem. The baby monitors might seem very low tech but they are reliable and effective.

So, what did all this cost? All the software was free and we scavenged everything except the 5m powered USB cable and the baby monitors. The total cost of this equipment was £40. A huge thanks to Nik, EDINA’s small systems support officer, who managed to find the software and put the lab together.

Version 3 User Testing


This round of testing concentrated on Version 3 of the new Data Downloader user interface. The two previous versions had undergone an “expert review” and testing with EDINA staff. Many issues were identified and solutions implemented. This version of the interface would be tested with actual users.

Finding Users

Finding actual users who could test the interface meant returning to the Digimap user logs. We identified staff and students who had used the current downloader and who were affiliated with an institution in the Edinburgh/Glasgow area. Candidates were divided into three categories:

  1. those that had used the current downloader 5 times or more
  2. those that had used the current downloader less than 5 times
  3. those that had used other Digimap services but had not used the current downloader.

We stuck to roughly the same format as the previous user testing session: a series of 5 set tasks that would explore much of the interface and site functionality. Each candidate would have a maximum of 45 minutes to work through the tasks, leaving 15 minutes for discussion between the facilitator and the observer. We intended to have 6 candidates starting at 10am, giving adequate time, or so we thought, to check the system was working on the day of the test.

We tweaked the tasks slightly, making changes to the way we presented information to the candidates. This was in response to feedback from the first round of testing with internal EDINA staff. It is amazing what candidates will extract from your handout that you have not even noticed, and sometimes small pieces of information can bias or mislead a test. This highlights how important a dry run is before organising sessions with external users.

Lessons

So what did we learn from this session? Well, this can be separated into things that would improve how we run tests and things that would improve the user interface.

About the test:

  1. Set up the lab and test that everything works on the day of the test. Do not assume that just because it worked yesterday it will work today.
  2. Run through the actual tasks on the test set-up as if you were a candidate. (I tested the new interface on my computer and it was fine, but the first candidate struggled and “things” just didn’t seem right. A bit of panicking later, we discovered that the UI didn’t run quite as intended in Firefox 8. The 15 minutes between candidates gave me time to download Chrome and test that everything was working.)
  3. Try not to run a session on the same day as a fire alarm test. (Yup, 5 minutes into the first candidate’s session the fire alarm went off and stayed on for over a minute. This was a scheduled test and I had completely forgotten about it. Live and learn.)
  4. Keep calm and carry on – even when everything seems to be going wrong you can still get something out of a session. If you discover a bug, just get the candidate to move on. If the interface becomes unusable, or the candidate gets flustered and disengages, just move on to discussing the interface and the process. Ask some questions that don’t require them to use the interface, such as “how would you like to interact with the site to get data?” or “what similar interfaces have you used?” (in this case it might be Google Maps or Bing Maps). This may allow you to ease them back into the tests.
  5. Don’t worry if the candidate seems shy and isn’t saying much. Remember to ask them to explain what they are doing and why, and they will most probably relax into the situation. A slow, quiet user who takes time to think can provide insightful feedback; you just have to coax them into thinking out loud.

About the User Interface:

  1. Some users found it difficult to see which button to press on the Basket pop-up; they were not sure if the appearance of this window indicated that their order had been placed, or if they still had to “do” something to place the order. (Closer examination of this issue reveals that some of the confusion may be related to two buttons that were added to the My Basket window between version 2 and version 3. They are the same size and colour as the Place Order button and may dilute its importance.)
  2. The “Add to Basket” button was still not prominent enough; users often did not spot it. (We had already tweaked this: in this version the button was initially grey, flashed red when items were selected from the product list, and was then blue like the other function buttons.)
  3. All pop-up windows must close when an action button is pressed. Users were often left thinking they still had something to do in the pop-up.
  4. The toggle between pan and draw rectangle is still not absolutely clear. Moving the search function out of the select area has helped, but more thought is needed on how to make this toggle clearer to the user.
  5. The My Account section is confusing to users, who are not sure why there are two grids displayed. We need to think about how to make this section clearer when it appears, while retaining the functionality of re-ordering an order or part of an order.
  6. Selecting data through the bounding box was not clear to all users. Some struggled to interpret the Upper Right X/Y and Lower Left X/Y diagram. (It is not clear whether users struggled with this because they were not initially sure what a bounding box was, or what X/Y were. However, we hope that the interface will be learnable, so that novice users will learn how to select data using things like the bounding box through the information presented to them in the UI. The language and terms used in the UI are industry-standard terms which are useful to know if you work with spatial data.)
  7. Add a text input box to sit alongside the search button.  A couple of users didn’t initially use the search function and commented that they hadn’t spotted it and were instinctively looking for a text input box where they could add search terms.

This is just a summary of the main points that we extracted from the session. You will find the complete list in the Version 3 User Testing Report (LINK).

Summing Up

Overall, the testing was a success and we have a number of development tasks that we can focus on.  Previous testing had identified issues with the process of selecting an area, selecting data and adding it to the basket. This seems to have been largely resolved and we have seen a migration of the main issues to the Basket and the My Account sections.  This is encouraging and suggests that the initial steps are now more intuitive.

However, some of the changes we implemented after Version 2 seem to have created as many issues as they have solved.  This is particularly clear in the case of the Basket.  Adding two extra buttons (clear basket and add more data) appears to have diluted the importance of the Place Order button.  This is unfortunate as the most important button on the Basket pop-up is the Place Order button.

 

Results of UI testing on Version 2

So, you think you have a good, usable product which clearly sets out what the user has to do to get what they want… and then you do some user testing. The UI testing on Version 2 of the downloader was extremely useful; it pointed out many things that we had missed and which now seem just so obvious. This post will outline the main points that emerged from the testing and describe how we ran the tests themselves. But before we start, it is important to remember that the testing revealed many positive things about the interface, and users thought it was an improvement over the current system. This post will concentrate on the negatives, but we shouldn’t be too depressed.

Setup

We decided to run this UI testing in a different configuration from the one we intend to use for the tests with external students. We wanted our usability expert to be able to guide us through the process so that we would conduct the tests using best practice. Viv was to be the “facilitator” and Addy was the “observer”. David was observing everything and would provide feedback between tests.

We had 5 candidates who would each run through 5 tasks during a 40-50 minute period. We left 30 minutes between each test to allow us time to get feedback from David and to discuss the tests. As it turned out, the day was quite draining and I wouldn’t recommend trying to do more than 6 candidates in a day. Your brain will be mush by the end of it and you might not get the most out of the final sessions.

Results

The tests went well and we improved as the day went on thanks to feedback from the usability expert, David Hamill. It was certainly useful to have David facilitate a session so that we could observe him in action.

The participants all said that they thought the interface was easy to use and quite straightforward. However, it was clear that most users struggled with the process of:

  1. selecting an area of interest
  2. selecting data products
  3. adding these products to the basket
  4. submitting the order

As the primary role of the interface is to allow users to order data this seems to be an area that will need significant investigation before the next iteration.  Other issues that arose during the sessions include:

  • The “Search and Select An Area” function still seemed to confuse users. Some struggled to see that they had to actually select an area in addition to just navigating to it using the map.
  • Basket Button looks busy and is not prominent enough.
  • Download limits not obvious to the user
  • Users often couldn’t recover from minor mistakes and “looked for a reset button” (technically you don’t need a reset button, but the users didn’t know this, so this needs to be addressed).
  • The Preview Area in the Basket was not all that useful; the popup covered the map which showed the selection. In addition to previewing the geographical extent selected, this should also preview the data product selected.
  • Make the info buttons easier to browse through
  • Add more information to the “Use Tile Name” section, perhaps investigate how we can integrate this with the view grid function on the right of the map window.
  • Add a clear all button to the basket area.

A detailed report of the main issues that emerged during the user testing can be found in the Version 2 Testing Report(pdf).

The testing session was a success on two levels. Viv and I learnt a great deal about conducting UI tests by having the usability expert present, and we identified some key areas of the interface that were causing users problems. Most of these are glaringly obvious once they have been pointed out to you, but then that is the point of UI testing, I suppose!

Discussions with CCED (or how I learned to stop worrying about vagueness and love point data)

I met recently with Prof. Stephen Taylor of the University of Reading. Prof. Taylor is one of the investigators of the Clergy of the Church of England Database (CCED) project, whose backend development is the responsibility of the Centre for Computing in the Humanities (CCH). Like so many other online historical resources, CCED’s main motivation is to bring things together, in this case information about the Church of England clergy between 1540 and 1835, just after which predecessors to the Crockford directory began to appear. There is, however, a certain divergence between what CCED does and what Crockford (simply a list of names of all clergy) does.

CCED started as a list of names, with the relatively straightforward ambition of documenting the name of every ordained person between those dates, drawing on a wide variety of historical sources. Two things fairly swiftly became apparent: that a digital approach was needed to cope with the sheer amount of information involved (CD-ROMs were mooted at first), and that a facility to build queries around location would be critical to the use historians make of the resource. There is therefore clearly scope for considering how Chalice and CCED might complement one another.

Even more importantly, however, some of the issues which CCED has come up against in terms of structure have a direct bearing on Chalice’s ambitions. What was most interesting from Chalice’s point of view was the great complexity of the geographic component. It is important to note that there was no definitive list of English ecclesiastical parish names prior to CCED (crucially, what was needed was a list which also followed through the history of parishes – e.g. dates of creation, dissolution, merging, etc.), and this is a key thing that CCED provides, which is in and of itself of great benefit to the wider community.

Location in CCED is dealt with in two ways: jurisdictional and geographical (see this article). Contrary to popular opinion, which tends to perceive a neat cursus honorum descending from bishop to archdeacon to deacon to incumbent to curate etc, ecclesiastical hierarchies can be very complex. For example, a vicar might be geographically located within a diocese, and yet not report to the bishop responsible for that diocese (‘peculiar’ jurisdictions).

In the geographic sense, location is dealt with in two distinct ways – according to civil geographical areas, such as counties, and according to what might be described as a ‘popular understanding’ of religious geography, treating a diocese as a single geographic unit. Where known, each parish name has a date associated with it, and for the most part this remains constant throughout the period, although where a name has changed there are multiple records (a similar principle to the attestation value of Chalice names, but a rather different approach in terms of structure).

Sub-parish units are a major issue for CCED, and there are interesting comparisons in the issues this throws up for EPNS. Chapelries are a key example: these certainly existed, and are contained within CCED, but it is not always possible to assign them to a geographical footprint (I left my meeting with Prof. Taylor considerably less secure in my convictions about spatial footprints), at least beyond the fact that, almost by definition, they will have been associated with a building. Even then there are problems, however. One example comes from East Greenwich, where there is a record of a curate being appointed, but there is no record of where the chapel is or was, and no visible trace of it today.

Boundaries are particularly problematic. The phenomenon of ‘beating the bounds’ around parishes only occurred where there was an economic or social interest in doing so, e.g. when there was an issue of which jurisdiction tithes should be paid to. Other factors in determining these boundaries were folk memory and the memories of the oldest people in the settlement. However, it is the case that, for a significant minority of parishes at least, pre-Ordnance Survey there was very little formal or mapped conception of parish boundaries.

For this reason, many researchers consider that mapping based on points is more useful than boundaries. An exception is where boundaries followed natural features such as rivers. This is an important issue for Chalice to consider in its discussion about capturing and marking up natural features: where and how have these featured in the assignation and georeferencing of placenames, and when?

A similar issue is the development of urban centres in the late 18th and 19th centuries: in most cases these underwent rapid changes; and a system of ‘implied boundaries’ reflects the situation then more accurately than hard and fast geolocations.

Despite this, CCED reflects the formal structured entities of the parish lists. Its search facilities are excellent if you wish to search for information about specific parishes whose name(s) you know, but it would be very difficult to search for ‘parishes in the Thames Valley’, or (another example given in the meeting) to define all parishes within one day’s horse-riding distance of Jane Austen’s home, which would allow the user to explore the clerical circles she would have come into contact with without knowing the names of the parishes involved.
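
To make that concrete: once each parish carries even a single representative point, a query like the Jane Austen example becomes a few lines of code. The sketch below is purely illustrative – the parish coordinates are invented and ‘one day’s horse ride’ is arbitrarily taken as 30 km – but it shows the kind of question a point-georeferenced gazetteer can answer that a purely name-based search cannot.

    # Hypothetical sketch: find parishes within a day's ride of a point of interest,
    # assuming each parish has a representative point (e.g. its church). The
    # coordinates below are invented for illustration.
    from math import radians, sin, cos, asin, sqrt

    def distance_km(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two lat/lon points, in km."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    parishes = [            # (parish name, latitude, longitude) -- made-up sample data
        ("Steventon", 51.23, -1.25),
        ("Chawton", 51.13, -0.99),
        ("East Greenwich", 51.48, 0.01),
    ]

    home = (51.23, -1.26)   # the point of interest
    DAY_RIDE_KM = 30        # assumed radius for "one day's horse ride"

    nearby = [name for name, lat, lon in parishes
              if distance_km(home[0], home[1], lat, lon) <= DAY_RIDE_KM]
    print(nearby)           # -> ['Steventon', 'Chawton']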

At sub-parish level, even the structured information is lacking. For example, there remains no definitive list of chapelries. CCED has ‘created’ chapelries where the records indicate that one existed (the East Greenwich example above is an instance of this). In such cases, a link with Chalice and/or the Victoria County History (VCH) could help establish or verify such conjectured associations (posts on Chalice’s discussions with VCH will follow at some point).

When one dips below even the imperfect georeferencing of parishes, there are non-geographic, or semi-geographic, exceptions which need to be dealt with: chaplains of naval vessels are one example, as are cathedrals, which sit outside the system, and indeed maintain their own systems and hierarchies. In such cases, it is better to pinpoint the things that can be pinpointed, and leave it to the researcher to build their own interpretations around the resulting layers of fuzziness. One simple point layer that could be added to Chalice, for example, is data from Ordnance Survey describing the locations of churches: a set of simple points which would associate the name of a parish with a particular location, not worrying too much about the amorphous parish boundaries, and yet eminently connectible to the structure of a resource such as CCED.
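
As a sketch of what such a layer might look like, a single church point could be recorded along the following lines. The structure loosely follows GeoJSON, and the name, coordinates and CCED identifier are invented for illustration – the point is simply that a parish name, a location and a link into CCED can sit together without making any claim about the parish boundary.

    # Hypothetical sketch of one feature in a church-point layer (GeoJSON-style dict).
    # The name, coordinates and identifier are invented for illustration.
    church_point = {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            "coordinates": [-1.086, 51.261],      # [longitude, latitude] of the church
        },
        "properties": {
            "parish_name": "Example St Mary",     # the attested parish name (e.g. from EPNS)
            "source": "Ordnance Survey church location",
            "cced_parish_id": "CCED:12345",       # conjectured link into the CCED parish list
        },
    }

    # The point can stand in for the parish without committing to a boundary:
    lon, lat = church_point["geometry"]["coordinates"]
    print(church_point["properties"]["parish_name"], lat, lon)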

In the main, the interests that CCED shares with Chalice are ones of structural association with geography. Currently, Chalice relies on point-based grid georeferencing, where that has been provided by county editors for the English Place Name Survey. However, the story is clearly far more complex than this. If placename history is also landscape history, one must accept that it is also intimately linked to Church history, since the Church exerted so much influence over all areas of life for so much of the period in question.

Therefore Chalice should consider two things:

  1. what visual interface/structure would work best to display complex layers of information
  2. how can the existing (limited) georeferencing of EPNS be enhanced by linking to it?

The association of (EPNS, placename, church, CCED, VCH) could allow historians to construct the kind of queries they have not been able to construct before.

CHALICE: Team Formation and Community Engagement

Institutional and Collective Benefits describes who, at an institutional level, is engaged with the CHALICE project. We have three work packages split across four institutions – the Centre for Data Digitisation and Analysis at Queen’s University Belfast; the Language Technology Group at the School of Informatics and the EDINA National Datacentre, both at the University of Edinburgh; and the Centre for e-Research at King’s College London.

The Chalice team page contains more detailed biographical data about the researchers, developers, technicians and project managers involved in putting the project together.

The community engagement aspect of CHALICE will focus on gathering requirements from the academic community on how a linked data gazetteer would be most useful to historical research projects concerned with different time periods. Semi-structured interviews will be conducted with relevant projects, and the researchers involved will be invited to critically review existing gazetteer services, such as GeoNames, with a view to identifying how they could get the most out of such a service. This will apply the same principles, based loosely on the methodology employed by the TEXTvre project. The project will also seek to engage with providers of services and resources. CHALICE will be able to enhance such resources, but also link them together: in particular the project will collaborate with services funded by JISC to gather evidence as to how these services could make use of the gazetteer. A rapid analysis of the information gathered will be prepared, and a report published within six months of the project’s start date.

When a first iteration of the system is available, we will revisit these projects and develop brief case studies that illustrate practical instances of how the resource can be used.

The evidence base thus produced will substantially inform the design of the user interface and the scoping and implementation of its functionality.

Gathering this information will be the responsibility of project staff at CeRch.

We would love to be more specific at this point about exactly which archive projects will link into CHALICE, but a lot will depend both on the spatial focus of the gazetteer and on the investigation and outreach during the course of the project. So we have half a dozen candidates in mind right now, but the detailed conversations and investigations will have to wait some months… see the next post on the project plan describing when and how things will happen.