Glasgow – willsnewman (flickr)

Jane Drummond opened the 22nd conference and explained that pink was the colour of the conference, hence the helpers wearing pink T-shirts. This might also explain the pink umbrellas the last time GISRUK visited Glasgow.


Mike Worboys' keynote gave “A Theoretician's Eye View of GIS Research”. He highlighted the dramatic fall in the proportion of GISRUK papers covering the theoretical side of GIS and mused that perhaps we had covered it all; in the end he highlighted several areas where there is still much theory to be discussed, including geo-semantics and geo-linguistics.

In the Urban Environment session, chaired by Peter Halls, we saw William Mackaness talk about Spacebook, a system for delivering directions via audio as users encounter various waypoints on a route. The research found that using landmarks gave better results than street names in terms of getting someone from A to B.

Phil Bartie, who was a researcher on William Mackaness's paper, delved deeper into the issue of landmarks. He used images to find out what people identified as landmarks and analysed them semantically and spatially to distinguish related and unrelated features. His use of trigrams, or groups of three words, may well be a solution to issues with obtaining good search results from EDINA's place-name gazetteer.
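The paper used word trigrams; the same sliding-window idea at character level is a common way to get fuzzy matching out of a gazetteer. A generic sketch of the technique (my own illustration, not EDINA's actual implementation):

```python
def trigrams(text):
    """Split a normalised string into overlapping three-character groups."""
    text = text.lower()
    return {text[i:i + 3] for i in range(len(text) - 2)}

def similarity(a, b):
    """Jaccard similarity of two strings' trigram sets (0 = unrelated, 1 = identical)."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# A misspelled query still matches the right gazetteer entry strongly:
print(similarity("Edinburgh", "Edinburg"))   # high
print(similarity("Edinburgh", "Glasgow"))    # zero overlap
```

Because trigrams survive small spelling errors, ranking gazetteer entries by this score returns sensible results even for imperfect queries.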

Nick Malleson was next, talking about using tweets as a proxy for ambient population. Despite issues with the quality and bias of Twitter data, he found that it still overcame the problems of using census data for city-centre populations when assessing crime rates. The peaks seen in crime rates for the main shopping and socialising areas disappeared once they were adjusted for the number of people present rather than the number actually living there. Outside these areas, crime rates remained high where census data showed social problems.
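The denominator effect Nick described is easy to see with made-up numbers: the same crime count looks very different depending on whether it is divided by residents or by people actually present.

```python
# Hypothetical figures illustrating the denominator effect: a city-centre
# ward with few residents but a large ambient (visiting) population.
crimes = 300
residential_pop = 1_000      # census count of people living there
ambient_pop = 25_000         # people actually present, proxied by tweet density

rate_residential = crimes / residential_pop * 1_000   # crimes per 1,000 residents
rate_ambient = crimes / ambient_pop * 1_000           # crimes per 1,000 people present

print(rate_residential)  # 300.0 -> looks like a severe hot spot
print(rate_ambient)      # 12.0  -> unremarkable once footfall is counted
```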

The use of Twitter in research continues to raise interesting questions about sampling validity and ethics; these would resurface on the second day.


Thursday was the only full day in this year's GISRUK programme and had three parallel sessions.

Spatial Analysis: the two best talks were really quite different. Georgios Maniatis discussed error quantification and constraints in environmental sensors. Georgios was looking at sediment movement in rivers; using a local reference frame offered accuracy improvements but added further complications, not least that a significant portion of the signal travel time was through water. Given the small distance from transmitter to receiver, errors could quickly become significant.

The other talk that stood out looked at visualising active spaces of urban utility cyclists. This was given by Seraphim Alvanides on behalf of Godwin Yeboah. Their analysis clearly showed that in certain areas of Newcastle the cycle infrastructure was misaligned with where cyclists actually rode. Cyclists used more direct routes to get to work and were more likely to detour on the way home for shopping or other leisure activities. It does not help that the Newcastle Metro, operated by Deutsche Bahn, does not allow cycles on its trains; operators in continental Europe seem more amenable to such integration.

Citizen Survey: this session looked really interesting, and Neil Harris (Newcastle Uni) kicked off with a very interesting description of a heterogeneous sensor infrastructure that used a schemaless approach. They had effectively decided to avoid XML and used key-value pairs instead. By using HStore they were able to hook things up with Postgres/PostGIS. The advantage of this approach was that they could integrate new sensors into the database easily by just adding key-value pairs to the main list. Key-value pairs may be seen as old hat by many, but with HStore they give quite a flexible solution. The work is part of the Science Central project and will effectively pull together all possible data feeds for Science Central to use.
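Conceptually, the hstore approach gives each row its own dictionary of fields, so a new sensor type just supplies new keys rather than a schema migration. A minimal Python sketch of the idea (the sensor names and keys here are made up; their actual system used HStore inside Postgres/PostGIS):

```python
# Schemaless sensor readings: each record is a bag of key-value pairs.
# Adding a new sensor type means adding new keys, not altering a schema.
readings = [
    {"sensor": "air-1", "pm10": "21.5", "no2": "38.0"},
    {"sensor": "weather-7", "temp_c": "11.2", "wind_ms": "4.6"},
    {"sensor": "air-2", "pm10": "18.9"},   # new sensor with a subset of keys
]

def values_for(key, records):
    """Pull one measurement type out of the heterogeneous store."""
    return [float(r[key]) for r in records if key in r]

print(values_for("pm10", readings))   # [21.5, 18.9]
```

In Postgres the equivalent query filters rows on `readings ? 'pm10'` and casts the hstore value, giving the same flexibility with indexing on top.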

The other presentation of note was by Robin Lovelace (Leeds), who invited discussion around the merits of Twitter data in research. This was not about the ethics of whether users knew what data they were giving up, but about the pros and cons of using the data at all.

  • Con – unregulated data, unfocused, loudest voice dominates
  • Pro – diverse, low cost, continuous, responsive

Using Twitter data may raise the following questions:

  1. Who made it? – the public
  2. Who owns it? – Twitter

As the discussion progressed it was mentioned that we may be in a golden age for social data: at the moment lots of people are providing information through social media, and companies like Twitter are allowing us to use the information for free. At some point either the public will realise what information they are providing and seek to limit it, or the government will perhaps do so, and social media companies (who trade on information about users) may restrict access to data or try to charge for it. Interesting and thought-provoking. If you want to find out more, look at Robin's presentation and download his code to set up a Twitter listener.

Remote Sensing – I used to do remote sensing, so I thought I would go to this session and see what was new. It turned out not to have a huge amount of remote sensing in it, but there were a couple of gems worth mentioning. First is the work that Jonny Huck (Lancaster University) is doing with sensors. Jonny presented at last year's GISRUK and it was good to see that work being used in other people's research, but the sensor work took a different direction. Jonny made a low-cost (£400) pollution-monitoring kit that also monitored the VO2 flux of users, allowing him to crudely calculate pollution risk. It was a simple kit using motes, smartphones and some basic GIS for visualisation. I found it quite refreshing to see a simple approach taking off-the-shelf kit and running simple experiments. This will hopefully lead to discussion, refinement and some really insightful science.

The other presentation that I enjoyed introduced Whitebox, a geospatial analysis toolkit created by John Lindsay. This is an open-source GIS package and I was stunned by how many tools it had: over 370 at the last count! Possibly most impressive was the lidar processing tool, which will happily open 16 GB of raw lidar point cloud and allow you to process it. I don't know of another open-source package that handles lidar. John likes to call Whitebox open-access rather than open-source. What's the difference? Well, when you open a module there is a “View Code” button, which opens the code that runs the module so you can see how it works and what it does.

Whitebox is relatively unknown, but John hopes to push it more; the audience suggested using GitHub rather than the Google Code repository and working towards OSGeo incubation. It does look good and I have already downloaded it. Oh, and it is a Java app, so it is easy to get working on any platform.

Plenary – I enjoyed the sessions and found something interesting in each one, but the plenaries were a bit underwhelming. Most conferences use the plenaries to bring everyone together and then get the big cheeses out to show off cutting-edge research or to inspire the audience. The Thursday plenary didn't seem to do this.

Friday – I was not able to attend on Friday, sorry.

Overall – the conference was well received and I found some of the talks really interesting. I would have liked to be inspired by a keynote at the plenary, and I hope that GISRUK 2015 in Leeds will use the plenary to motivate the group to continue to do great GIS research. Thanks to the local team for pulling the event together; it is never an easy task. You even managed to get the weather sorted.



GISRUK 2013 – Liverpool

GISRUK 2013 was hosted by the University of Liverpool between April 3rd and 5th. The conference kicked off with a keynote presentation from Paul Longley. Paul is well known for his long research career and his excellent textbooks, which form the cornerstone of so many courses in GIS. His talk, “A name is a statement”, investigated many aspects of geodemographics and genealogy. Longley highlighted the work that the Wellcome Trust had been involved in to create a map of Britain's genetic make-up. From this work you could see how the south of Britain was all very similar, but areas such as Orkney were distinctly different from the rest of Britain. This perhaps relates to the influence of Vikings on the islands' gene pool (we will forgive him a slip referring to Orkney as the Inner Hebrides). But he pointed out that the patterns reflected the sampling strategy used to collect the base data, which was based on two premises:

  1. all participants were from rural or semi-rural areas, as it was thought that urban medical centres would be busier and more likely to make mistakes taking samples;
  2. participants had to be able to trace both sets of grandparents.

The Wellcome Trust's DNA database is a nice demonstration of the power of large datasets; however, care is needed when analysing results, as they can be influenced by the sampling strategy.

Longley then moved on to show a number of studies that focused specifically on names. CASA has been investigating links between names and place for a while. Pablo Mateos has a number of papers which explore these links (2007 CASA Working Paper, 2011 PLOS One paper), including analysis of naming patterns across 17 countries around the world (2011 PLOS One paper). Anyone looking for data about names should look at ONOMAP (although the Onomap site is down at the time of writing). An alternative data source might be Twitter. If you filter the account names to leave only the ones with a proper first and second name, you can then investigate details such as when/where they tweet, how often they tweet and what they tweet about. However, there are considerations about the base data that you have to be aware of: it is not representative of the population as a whole. Twitter users fall into the 20-50 age bracket and tend to be middle-class. (I might add that while you can infer ethnicity from a Twitter name, it tells you nothing about what users consider themselves to be, i.e. British/not British.) The final aspect that Longley presented was some initial investigation into what a name can tell you about class and background. For example, Ryan is the 6th most popular name for professional footballers but doesn't appear in the top 50 names of Oxford graduates (I am not sure where these datasets came from). I might add that it only costs £35 to change your name.
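The account-name filtering step can be illustrated with a crude heuristic: keep only display names that look like "Firstname Surname". This regex is my own illustration, not Longley's actual filter (a real one would need to handle accents, hyphens and more):

```python
import re

# Crude filter: two capitalised, lowercase-continued words separated by a space.
proper_name = re.compile(r"^[A-Z][a-z]+ [A-Z][a-z]+$")

names = ["Jane Smith", "gis_fanatic99", "Pablo Mateos", "LOUD VOICE", "Ana"]
keep = [n for n in names if proper_name.match(n)]
print(keep)   # ['Jane Smith', 'Pablo Mateos']
```

Everything that survives the filter can then be analysed for tweeting time, place and content as described above.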

Longley also commented on the information that the census gathers and questioned whether it still collects the information that analysts need. There is an increasing desire to know about the use of digital technology, but this sector develops at such a rate that a 10-year sampling interval would not be appropriate.

Onto the first of the parallel sessions, where a brief scan of the programme suggested it would be difficult to decide which stream to attend. Rather than describe each presentation, I have grouped them into topic themes.

Stats using R

Perhaps it shouldn't have been a surprise that there were a lot of papers using R. There was a workshop on Tuesday on the subject, and Liverpool has a strong research group that uses R as its stats tool of choice. Chris Brunsdon (Liverpool) outlined how easy it is to access data through APIs from R. The other nugget from Chris was that you can use R and Shiny to make web services, making your data interactive and allowing users to perform some analysis over the web. I will certainly be looking into these a bit further.

Mobile Mapping

There were a few presentations on mobile mapping apps. Michalis Vitos (UCL) had created a pictorial system that allowed illiterate users to record evidence of illegal logging in the Congo Basin. The app was stripped back to make it intuitive and easy to use for someone who may not be able to read or write. Distances were estimated in terms of football pitches. Michalis had used ODK Collect to build the app, and initial tests in the field suggested that users could collect useful data through it.

EDINA showcased its new data collection app, Fieldtrip GB, which allows users to design and deploy data forms that meet the needs of their research. Fieldtrip GB is free and is available for both iPhone and Android. Ben Butchart didn't dwell much on the functionality of the app, choosing instead to explain some of the technical issues the development team had to overcome.


SpaceBook is a project that William Mackaness and Phil Bartie (University of Edinburgh) are involved in. Essentially the idea is to provide information to users about what they can see, or about how to get to a place, using visual aids and human-interpretable instructions (“the target is to the left of the Scott Monument, which is the tall tower directly ahead”). The app adopts a speech-based approach, ensuring that the user's hands are free to do other things such as take pictures. The app has to make some assumptions to extract the user's orientation, but it would be interesting to try it out. Certainly, Edinburgh's hilly terrain lends itself to such an app, as the skyline changes as you rise and fall across the city.


Empires decline – Pedro Miguel Cruz

The second keynote was given by Jason Dykes of City University London. Jason is well known for taking a dataset and presenting it in a novel way. With an hour to fill, Jason took us through some of the more interesting projects he has been working on and, as usual, ran live demos, changing parameters and re-generating the visualisations on the fly. The first visualisation, from Pedro Cruz, showed the decline of empires through time. It starts with four large “blobs” that slowly fragment into countries until we have a semi-recognisable world map. This would be great as a teaching aid in schools.

London Bike Hire Scheme – Map view

Other visualisations worth a look include the BikeGrid, which takes feeds from the London Bike Scheme and allows you to view them in a standard geographic layout and then as a grid. The example for London works well, as the river retains an element of geographic separation when the grid view is used. This idea of switching between geographic and non-geographic views can be taken further if you switch to a relationship view, where clusters of similar things are formed. In one example you could vary how much geographic control was exerted on the view and see whether or not geography was the reason for the relationship (I can't find the link to this at the moment).
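The switch between geographic and grid layouts boils down to interpolating each station between two screen positions. A minimal sketch of that transition (coordinates here are made up):

```python
# Each station has two layouts: its map-projected screen position and the
# centre of its assigned grid cell. The view blends between them.
def blend(geo, grid, t):
    """t = 0.0 gives the geographic layout, t = 1.0 the regular grid."""
    return tuple(g + t * (q - g) for g, q in zip(geo, grid))

station_geo = (120.0, 340.0)    # screen position from the map projection
station_grid = (200.0, 300.0)   # screen position of its grid cell

print(blend(station_geo, station_grid, 0.0))   # geographic layout
print(blend(station_geo, station_grid, 0.5))   # halfway through the animation
print(blend(station_geo, station_grid, 1.0))   # grid layout
```

Animating `t` smoothly is what lets the viewer keep track of which station is which as the layout changes.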

London Bike Hire Scheme – Grid View

All the whizz-bang shown in Jason's presentation is linked from his webpage. In addition, there are links to the giCentre's utilities, which should help anyone interested in using packages such as Processing to visualise data.

Other interesting things of note

There were a few other items worth mentioning that perhaps don't fit neatly into my hashed themes. One of these, from Jonathon Huck (Lancaster University), is a site that allows people to create simple questionnaires and then interact with a map to convey how strongly they feel about the topic using a “spray can” technique. The service is free and allows users to perform basic fuzzy geographic analysis through participatory science. The technique seems to lend itself well to applications such as locating new wind farms, or perhaps monitoring anti-social behaviour in a neighbourhood.

Candela Sanchez discussed the Map Kibera project, which saw slum communities map their neighbourhoods. Candela applied a similar approach to map the Shankar Maharaj slum in India, looking at how easy it was to implement the Kibera formula and what possible issues it threw up. The issues related to power, the local knowledge that slum dwellers had, and the possibility that once mapped, the land could be “valued” and residents taxed or moved on by the landlords. Community buy-in and involvement throughout such projects is critical if they are to benefit the community itself.

Candela received the best “Open” paper award from OSGeo.  Phil Bartie won the overall best paper award.  GISRUK 2014 will take place in Glasgow.



GISRUK 2012 – Wednesday

GISRUK 2012 was held in Lancaster, hosted by Lancaster University. The conference aimed to cover a broad range of subjects including Environmental Geoinformatics, Open GIS, Social GIS, Landscape Visibility and Visualisation, and Remote Sensing. In addition to the traditional format, this year's event celebrated the career of Stan Openshaw, a pioneer in the field of computational statistics and a driving force in the early days of GIS.


The conference kicked off with a keynote from Professor Peter Atkinson of the University of Southampton, which demonstrated the use of remotely sensed data for spatial and temporal monitoring of environmental properties. Landsat provides researchers with 40 years of data, making it possible to track longer-term changes. Peter gave two use-case examples:

  1. River channel monitoring on the Ganges. The Ganges forms the international boundary between India and Bangladesh, so understanding channel migration is extremely important for both countries. The influence of man-made structures, such as barrages to divert water to Calcutta, can have a measurable effect on the river channel. Barrages were found to stabilise the migrating channel.
  2. Monitoring regional phenology. Studying the biomass of vegetation is tricky but using “greenness” as an indicator provides a useful measure. Greenness can then be calculated for large areas, up to continent scale.  Peter gave an example where MODIS and MERIS data had been used to calculate the greenness of India. Analysis at this scale and resolution reveals patterns and regional variation such as the apparent “double greening” of the western Ganges basin which would allow farmers to have two harvests for some crops.
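The talk did not say which index was used, but "greenness" is typically computed from red and near-infrared reflectance with a vegetation index such as NDVI (the reflectance values below are illustrative):

```python
# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
# Healthy vegetation reflects strongly in the near-infrared and absorbs red,
# so dense canopy gives values well above zero.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(0.45, 0.08))   # dense vegetation -> high greenness
print(ndvi(0.12, 0.10))   # sparse vegetation -> near zero
```

Computing this per pixel over MODIS or MERIS imagery gives the continent-scale greenness maps described above, and tracking it through the season reveals patterns like the "double greening".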

However, these monitoring methods are not without their challenges and limitations. Remote sensing provides continuous data on a regular grid, while ground-based measurements are sparse and may not tie in, spatially or temporally, with the remotely sensed data. Ground-based phenology measurements can also be derived using a number of methods, making comparisons difficult. A possible solution would be to adopt a crowd-sourcing technique where data is collected and submitted by enthusiasts in the field. This would certainly give a better spatial distribution of ground-based measurements, but would the resulting data be reliable? Automatically calculating greening from webcams is currently being trialled.

The first session was then brought to a close with two talks on the use of terrestrial lidar. Andrew Bell (Queen's University Belfast) was investigating the use of terrestrial LiDAR for monitoring slopes. DEMs were created from the scans and used to detect changes in slope, roughness and surface. The project aims to create a probability map identifying surfaces that are likely to fail and cause a hazard to the public. Andrew's team will soon receive some new airborne LiDAR data; however, I feel that if this technique is to be useful to the Highways Agency, the LiDAR would have to be mounted on a car, as cost and repeatability would be two key drivers. Andrew pointed out that this would reduce the accuracy of the data, but perhaps such a reduction would be acceptable and change would still be detectable.
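At its simplest, change detection from repeat scans reduces to differencing DEM snapshots. A toy sketch (the elevation values are made up; a real workflow would difference full rasters and then threshold the result):

```python
# Two DEM snapshots of the same slope, as elevation grids in metres.
dem_before = [[10.0, 10.2], [10.1, 10.3]]
dem_after  = [[10.0, 10.2], [ 9.6, 10.3]]

# Cell-by-cell difference: negative values mean material has been lost.
change = [[round(a - b, 3) for a, b in zip(row_after, row_before)]
          for row_after, row_before in zip(dem_after, dem_before)]
print(change)   # [[0.0, 0.0], [-0.5, 0.0]] -- half a metre lost at one cell
```

Repeating this over many epochs, and adding slope and roughness derivatives, is what feeds the failure-probability map described above.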

Neil Slatcher's (Lancaster University) paper discussed the importance of calculating the optimum location to deploy a terrestrial scanner. Neil's research concentrated on lava flows, which meant the landscape was rugged, some areas were inaccessible, and the dynamic target had to be scanned in a relatively short period of time. When a target cannot be fully covered by just one scan, analysis of the best positions to give complete coverage is needed. Further, a 10 Hz scanner makes 10 measurements per second, which seems quick, but a dense grid can result in scan times in excess of 3 hours. By sub-dividing the scan into smaller windows centred over the target, you can significantly reduce the size of the grid and the number of measurements required, and hence the time it takes to acquire the data. This method reduced scan times from 3 hours to 1 hour 15 minutes.
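The scan-time arithmetic is worth a quick check. The grid sizes below are my own illustrative numbers, chosen only to reproduce the quoted times at 10 measurements per second:

```python
# Scan time = number of grid points / measurement rate.
rate_hz = 10                    # measurements per second

full_grid_points = 108_000      # dense grid over the whole scene (illustrative)
windowed_points = 45_000        # smaller windows centred on the target (illustrative)

def scan_hours(points):
    """Hours needed to measure every point in the grid."""
    return points / rate_hz / 3600

print(scan_hours(full_grid_points))   # 3.0 hours
print(scan_hours(windowed_points))    # 1.25 hours (1 h 15 min)
```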

The final session of the day had two parallel streams, one on Mining Social Media and the other on Spatial Statistics. Both were interesting subjects, but I opted to attend the Social Media strand.

  • Lex Comber (University of Leicester) gave a presentation on Exploring the geographies in social networks.  This highlighted that there are many methods for identifying clusters or communities in social data but that the methods for understanding what a community means are still quite primitive.
  • Jonny Huck (Lancaster University) presented on geocoding for social networking of social data. This focused on the Royal Wedding, as it was an announced event expected to generate traffic on social media, allowing the team to plan rather than react. They found that less than 1% of tweets contained explicit location information. You could parse the tweets to extract geographic information, but this introduced considerable uncertainty. Another option was to use the location info in users' profiles and assume they were at that location. The research looked at defining levels of detail, so Lancaster University Campus would be defined as Lancaster University Campus / Lancaster / Lancashire / England / UK. By geocoding the tweets at as many levels of detail as possible, you could then run analysis at the appropriate level. What you had to be careful of was creating false hot spots at the centroids of each country.
  • Omar Chaudhry (University of Edinburgh) explained the difficulties in modelling confidence in the extraction of place tags from Flickr. Using Edinburgh as a test case, they tried to use Flickr tags to define the dominant feature of each grid cell covering central Edinburgh. Issues arose when many photos were tagged for a personal event such as a wedding, and efforts were made to reduce the impact of these events. Weighting the importance of a tag by the number of users who used it, rather than the absolute number of times it was used, seemed to improve results. There was still the issue of tags relating to what the photo was of, rather than where it was taken. Large features such as the Castle and Arthur's Seat dominated the coarser grids as they are visible over a wide area.
  • Andy Turner and Nick Malleson (University of Leeds) gave a double header as they explained Applying geographical clustering methods to analyse geo-located open micro-blog posts: a case study of tweets around Leeds. The research showed just how much information you can extract from location information in tweets, almost giving you a socio-economic profile of the people. There was some interesting discussion around the ethics of this, specifically in relation to the Data Protection Act, which states that data can only be used for the purpose for which it was collected. Would this research/profiling be considered what the original data had been collected for? Probably not. However, that was part of the research: to see what you could do, and hence what companies could do if social media sites such as Twitter start to allow commercial organisations to access your personal info. For more information, look at this paper or check out Nick's Blog.
  • One paper that was suggested as a good read on relating tweets to place and space was Tweets from Justin Bieber’s heart: the dynamics of the location field in user profiles.
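The multi-level geocoding idea from Jonny Huck's talk can be sketched as storing the whole place hierarchy for each tweet and aggregating at whichever level an analysis needs (the helper and place names here are illustrative):

```python
# Store every level of the place hierarchy for a tweet, coarsest last,
# so analysis can pick the appropriate level of detail afterwards.
hierarchy = ["Lancaster University Campus", "Lancaster", "Lancashire", "England", "UK"]

def geocode_levels(levels):
    """Map each level of detail to the place name resolved at that level."""
    return {f"level_{i}": name for i, name in enumerate(levels)}

tweet_location = geocode_levels(hierarchy)
print(tweet_location["level_2"])   # county-level analysis uses this entry
print(tweet_location["level_4"])   # country-level -- but beware plotting these
                                   # at country centroids, which creates the
                                   # false hot spots mentioned above
```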

I will post a summary of Thursday as soon as I can.