Aerial Digimap data: The mapping service where it’s always sunny

The latest Digimap addition is aerial imagery, covering the whole of Great Britain at 25cm resolution. The University of Edinburgh has just subscribed to Aerial Digimap, so the great news is that staff and students can now access these wonderful images, overlay them onto other map layers, and combine them with building height and topography data to make amazing and beautiful three-dimensional maps of the whole of Britain.

Map created using Aerial Digimap

I’ve used Aerial Digimap to label the entrance to Argyle House, home of EDINA. © GetMapping and University of Edinburgh. This map contains OS data.

Digimap is a visual interface that allows users to explore, annotate and download mapping data covering the whole of Great Britain.* Digimap’s historical map data go back as far as the 1840s, while geological, marine and environmental data have been available for some time.

It’s strikingly sunny in the images of Edinburgh. The Digimap team confirmed this is a UK-wide phenomenon: “Aerial Photography can only be captured on clear days, so it’s always sunny in Aerial Roam!”

You can watch a guided tour of Aerial Digimap’s features, and a demonstration by EDINA’s Ian Holmes of how to make the most of them, in this recently recorded webinar:



To get started with Aerial Digimap, log in with your EASE account at:

* For mapping data covering Northern Ireland, please see Ordnance Survey of Northern Ireland.


Pauline Ward is a Research Data Service Assistant based at EDINA, supporting staff and students at the University of Edinburgh


eLearning@ed/LTW Monthly Showcase #2: Open

Today we have our second eLearning@ed/LTW Showcase and Network event. I’m liveblogging so, as usual, corrections and updates are welcome. 
Jo Spiller is welcoming us along and introducing our first speaker…
Dr. Chris Harlow – “Using WordPress and Wikipedia in Undergraduate Medical & Honours Teaching: Creating outward facing OERs”
I’m just going to briefly tell you about some novel ways of teaching medical students and undergraduate biomedical students using WordPress and platforms like Wikipedia. So I will be talking about our use of WordPress websites in the MBChB curriculum. Then I’ll tell you about how we’ve used the same model in Reproductive Biology Honours. And then how we are using Wikipedia in Reproductive Biology courses.
We use WordPress websites in the MBChB curriculum during Year 2 student-selected components. Students work in groups of 6 to 9 with a facilitator. They work with a provided WordPress template – the idea being that the focus is on the content rather than the look and feel. In the first semester the topics are chosen by the group’s facilitator. In semester two the topics and facilitators are selected by the students.
So, looking at example websites, you can see that the students have created rich websites with content and appendices. It’s all produced online, marked online and assessed online. And once that has happened the sites are made available on the web as open educational resources that anyone can explore and use here:
The students don’t have any problem at all building these websites and they create these wonderful resources that others can use.
In terms of assessing these resources there is a 50% group mark on the website from an independent marker, a 25% group mark on the website from a facilitator, and (at the students’ request) a 25% individual mark on student performance and contribution, which is also given by the facilitator.
In terms of how we have used this model with Reproductive Biology Honours, it is a similar idea. We have 4-6 students per group. This work counts for 30% of their Semester 1 course “Reproductive Systems” marks, and assessment is along the same lines as the MBChB. Again, we can view examples here (e.g. “The Quest for Artificial Gametes”). It’s worth noting that there is a maximum word count of 6000 words (excluding appendices).
So, now onto the Wikipedia idea. This was something which Mark Wetton encouraged me to do. Students are often told not to use or rely on Wikipedia but, speaking as a biomedical scientist, I use it all the time. You have to use it judiciously, but it can be an invaluable tool for engaging with unfamiliar terminology or concepts.
The context for the Wikipedia work is that we have 29 Reproductive Biology Honours students (50% Biomedical Sciences, 50% intercalating medics), split into groups of 4-5 students. We did this in Semester 1, week 1, as part of the core “Research Skills in Reproductive Biology” course. And we benefited from expert staff including two Wikipedians in Residence (at different Scottish organisations), a librarian, and a learning, teaching and web colleague.
So the students had an introduction to Wikipedia, then some literature searching examples. We went on to groupwork sessions to find papers on particular topics, looking for differences in definitions, spellings and terminology. We discussed findings. This led on to groupwork where each group defined their own aspect to research. And from there they looked to create Wikipedia edits and pages.
The groups really valued trying out different library resources and search engines, and seeing the varying content that was returned by them.
The students then, in the following week, developed their Wikipedia editing skills so that they could combine their work into a new page for Neuroangiogenesis. Getting that online in an afternoon was incredibly exciting. And actually that page was high in the search rankings immediately. Looking at the traffic statistics, that page seemed to be getting 3 hits per day – a lot more reads than the papers I’ve published!
So, we will run the exercise again with our new students. I’ve already identified some terms which are not already out there on Wikipedia. This time we’ll be looking to add to or improve High-Grade Serous Carcinoma and Fetal Programming. But we have further terms that need more work.
Q1) Did anyone edit the page after the students were finished?
A1) A number of small corrections, and one query over whether a PhD thesis was a suitable reference – whether it is a primary or secondary source. What needs to be done more than anything else is building more links into that page from other pages.
Q2) With the WordPress blogs you presumably want some QA, as these are becoming OERs. What would happen if a project got, say, a low C?
A2) Happily that hasn’t happened yet. That would be down to the tutor, I think… But I think people would be quite forgiving of undergraduate work, which it is clearly presented as.
Q3) Did you consider peer marking?
A3) An interesting question. Students are concerned that there are peers in their groups who do not contribute equally, or let peers carry them.
Comment) There is a tool called PeerAim where peer input weights the marks of students.
Q4) Do all of those blog projects have the same model? I’m sure I saw something on peer marking?
A4) There is peer feedback but not peer marking at present.
Dr. Anouk Lang – “Structuring Data in the Humanities Classroom: Mapping literary texts using open geodata”
I am a digital humanities scholar in the School of Literatures, Languages and Cultures. One of the courses I teach is digital humanities for literature, which is a lovely class, and I’m going to talk about projects in that course.
The first MSc project the students looked at explored Robert Louis Stevenson’s The Dynamiter. We were mapping the text, but the key aim was to understand who wrote what part of it.
So the reason we use mapping in this course is that these are brilliant analytical students, but they are not used to working with structured data, and this is an opportunity to do that. So, using CartoDB – a brilliant tool that will draw data from Google Sheets – they needed to identify locations in the text, but I also asked students to give texts an “emotion rating”: a rating of intensity of emotion based on the work of Ian Gregory, a spatial historian who has worked on the emotional intensity of texts about the Lake District.
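To make the “structured data” point concrete: the hand-built database is essentially a spreadsheet with one row per place mention. The column names and values in this Python sketch are invented for illustration – they are not the actual schema the class used.

```python
import csv
import io

# Hypothetical rows of the kind the students hand-coded: one row per place
# mention, with a 1-5 emotion-intensity rating. Values are made up here.
rows = [
    # (place, latitude, longitude, chapter, emotion)
    ("London", 51.5074, -0.1278, 1, 2),
    ("Utah", 39.3200, -111.0937, 4, 4),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["place", "lat", "lon", "chapter", "emotion"])
writer.writerows(rows)
print(buffer.getvalue())
```

A sheet in this shape is exactly what a tool like CartoDB can pull in and plot, since each row carries its own coordinates.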
So, the students built this database by hand. Then, loaded into CartoDB, you get all sorts of nice ways to visualise the data. So, looking at a map of London you can see where the story occurs. The Dynamiter is a very weird text, with a central story in London but side stories about the planting of bombs, which is kind of played as comedy. The view I’m showing here is a heatmap, so for this text you can see the scope of the text. Robert Louis Stevenson was British, but his wife was American, and you see that this book brings in American references, including unexpected places like Utah.
So, within CartoDB you can try different ways to display your data. You can view a “Torque Map” that shows chronology of mentions – for this text, which is a short story, that isn’t the most helpful perhaps.
Now we do get issues of anachronism. OpenStreetMap – on which CartoDB is based – is a contemporary map, and the geography and locations on the map change over time. And so another open data source was hugely useful in this project. Over at the National Library of Scotland there is a wonderful maps librarian called Chris Fleet who has made huge numbers of historical maps available, not only as scanned images but as map tiles through a historical open maps API, so you can zoom into detailed historical maps. That means that when mapping a text from, say, the late 19th century, it’s incredibly useful to view a contemporaneous map alongside the text.
You can view the Robert Louis Stevenson map here:
So, moving to this year’s project… We have been looking at Jean Rhys. Rhys was a white Creole born in Dominica who lived mainly in Europe. She is a really located author, with place important to her work. For this project, rather than hand coding texts, I used the wonderful Edinburgh Geoparser – a tool I recommend, and a new version is imminent from Claire Grover and colleagues in LTG, Informatics.
So, the Geoparser goes through the text and picks out strings that look like places, then tells you which it thinks is the most likely location for each place – based on aspects like nearby words in the text etc. That produces XML, and Claire has created an XSLT stylesheet for me, so all the students have had to do is manually clean up that data. The Geoparser gives you a GeoNames reference that enables you to check latitude and longitude. Now, this sort of data cleaning, and the concept of gazetteers, are bread-and-butter tools of the digital humanities, but they are very unfamiliar to many of us working in the humanities. This is open, shared, and the opposite of the scholar working secretly in the library.
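As a rough illustration of the clean-up step, here is a minimal Python sketch that pulls names and coordinates out of geoparsed XML so they can be eyeballed against a map. The XML snippet, element and attribute names here are simplified inventions for the sketch; the Edinburgh Geoparser’s real output schema is richer and differs in detail.

```python
import xml.etree.ElementTree as ET

# Invented, simplified stand-in for geoparser output: each <place> carries
# the resolver's best-guess gazetteer entry with latitude/longitude.
SAMPLE = """
<doc>
  <place name="Halifax" gazref="geonames:2964180" lat="44.65" long="-63.57"/>
  <place name="Paris" gazref="geonames:2988507" lat="48.85" long="2.35"/>
</doc>
"""

def extract_places(xml_text):
    """Pull out (name, gazetteer ref, lat, long) tuples for manual checking."""
    root = ET.fromstring(xml_text)
    return [
        (p.get("name"), p.get("gazref"), float(p.get("lat")), float(p.get("long")))
        for p in root.findall("place")
    ]

for name, ref, lat, lon in extract_places(SAMPLE):
    print(f"{name}: {ref} ({lat}, {lon})")
```

Listing every resolved place with its coordinates is what lets a student spot, say, a Halifax that has landed on the wrong continent.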
We do websites in class to benefit from that publicness – and the meaning of public scholarship. When students are doing work in public they really rise to the challenge. They know it will connect to their real-world identities. I insist students show their name, their information, their image, because this is part of their digital scholarly identities. I want people who Google them to find this lovely site with its scholarship.
So, for our Jean Rhys work I will show you a mock-up preview of our data. One of the great things about visualising your data in these ways is that you can spot errors in it. So, for instance, checking a point in Canada we see that the Geoparser has picked Halifax, Nova Scotia when the text indicates Halifax in England. When I raised this issue in class today the student got a wee bit embarrassed and made immediate changes… which again is a perk of working in public.
Next week my students will be trying out QGIS with Tom Armitage of EDINA; that’s a full-on GIS system, so that will be really exciting.
For me there are real pedagogical benefits to these tools. Students have to think really hard about structuring their data, which is really important. As humanists we have to put the data in our work into computational form. Taking this kind of class means they are more questioning of data, of what it means, of what accuracy is. They are critically engaged with data, and they are prepared to collaborate in a gentle kind of way. They also get to think about place in a literary sense, in a way they haven’t before.
We like to think that we have it all figured out in terms of understanding place in literature. But when you put a text into a spreadsheet you really have to understand what is being said about place in a whole different way than in a close reading. So, if you take a sentence like “He found them a hotel in Rue Lamartine, near Gare du Nord, in Montmartre”, is that one location or three? The Edinburgh Geoparser maps two points but not Rue Lamartine… so you have to use Google Maps for that… And is the accuracy correct? And you have to discuss whether those two map points are distorting. The discussion there is richer than any discussion you would have around close reading alone. We are so confident about close reading… We assume it as a research method… This is a different way to close read – to shoehorn the text into a different structure.
So, I really like Michel de Certeau’s “Spatial Stories” in The Practice of Everyday Life (de Certeau 1984), where he talks about structured space and the ambiguous realities of use and engagement in that space. And that’s what that Rue Lamartine type of example is all about.
Q1) What about looking at distance between points – how does length of discussion vary in comparison to real distance?
A1) That’s an interesting thing. And that CartoDB Torque display is crude but exciting to me – a great way to explore that sort of question.
OER as Assessment – Stuart Nichol, LTW
I’m going to be talking about OER as assessment from a student’s perspective. I study part-time on the MSc in Digital Education, and a few years ago I took a module called Digital Futures for Learning, a course co-created by participants and where assessment is built around developing an open educational resource. The purpose is to “facilitate learning for the whole group”. This requires a pedagogical approach (to running the module) which is quite structured, to enable that flexibility.
So, for this course, the assessment structure is 30% position paper (the basis of content for the OER), then 40% of the mark for the OER (30% peer-assessed and tutor-moderated / 10% self-assessed), and then the final 30% of the marks come from an analysis paper that reflects on the peer assessment. You could then resubmit the OER along with that paper reflecting on the process.
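As a sanity check on that breakdown, the final mark is just a weighted sum, with the weights covering 100%. The component marks in this sketch are invented for illustration.

```python
# Mark scheme as described: 30% position paper, 40% OER (split 30%
# peer-assessed / 10% self-assessed), 30% analysis paper.
weights = {
    "position_paper": 0.30,
    "oer_peer": 0.30,
    "oer_self": 0.10,
    "analysis_paper": 0.30,
}
# Illustrative component marks only.
marks = {
    "position_paper": 65,
    "oer_peer": 70,
    "oer_self": 72,
    "analysis_paper": 68,
}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights cover the full grade
final = sum(weights[c] * marks[c] for c in weights)
print(round(final, 1))  # prints 68.1
```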
I took this module a few years ago, before the University’s adoption of an open educational resources policy, but I was really interested in this. So I ended up building a course on Open Accreditation, and Open Badges, using Weebly:
This was really useful as a route to learn about Open Educational Resources generally but that artefact has also become part of my professional portfolio now. It’s a really different type of assignment and experience. And, looking at my stats from this site I can see it is still in use, still getting hits. And Hamish (Macleod) points to that course in his Game Based Learning module now. My contact information is on that site and I get tweets and feedback about the resource which is great. It is such a different experience to the traditional essay type idea. And, as a learning technologist, this was quite an authentic experience. The course structure and process felt like professional practice.
This type of process, and use of open assessment, is in use elsewhere. In Geosciences there are undergraduate students working with local schools and preparing open educational resources around that. There are other courses too. We support that with advice on copyright and licensing. There are also real opportunities for this in the SLICCs (Student Led Individually Created Courses). If you are considering going down this route then there is support at the University from the IS OER Service – we have a workshop at KB on 3rd March. We also have the new Open.Ed website, about Open Educational Resources which has information on workshops, guidance, and showcases of University work as well as blogs from practitioners. And we now have an approved OER policy for learning and teaching.
The new OER policy bears on assessment too: it is clear that OERs are created by both staff and students.
And finally, fresh from the ILW editathon this week, Ewan MacAndrew, our new Wikimedian in Residence, will introduce us to Histropedia (interactive timelines for Wikipedia) and run through a practical introduction to Wikipedia editing.


Working with the Royal Botanic Gardens Edinburgh

Palm House, Royal Botanic Garden Edinburgh

Over the last few weeks we have been working in partnership with the Royal Botanic Gardens Edinburgh, who hold an excellent collection of County Surveys as part of their impressive collections. The RBGE is currently in the process of having their rare books comprehensively catalogued by the Rare Book Cataloguer from the Centre for Research Collections (CRC) at the University of Edinburgh, and we are pleased to be able to contribute to this process by assisting in the cataloguing of the County Survey holdings. Once they are complete, we hope that these new electronic records will form the basis of another data set for our online demonstrator.

The RBGE also has state-of-the-art equipment and digitisation specialists in house: although they are currently involved in an extensive project to digitise specimens from the internationally renowned herbarium, staff have generously shared their knowledge and allowed us to use their equipment to digitise a few of the surveys. We are pleased to report this work is going very well, and we should be able to make the digitised copies available soon, so watch this space.

NHS Health Atlas – risk and disease


Risk of Melanoma – from BBC and Imperial College London

NHS Choices have published a health atlas that maps the risk of a number of illnesses across England and Wales. The research behind the map, which compiles data from over 25 years, was carried out by Imperial College London.

The data was collected between 1985 and 2009 from the ONS and from cancer registries. The 12 diseases and conditions that have been mapped are:

  • Lung cancer
  • Breast cancer
  • Prostate cancer
  • Malignant melanoma
  • Bladder cancer
  • Mesothelioma
  • Liver cancer
  • Coronary heart disease
  • COPD mortality
  • Kidney disease
  • Stillbirth
  • Low birth weight

A cursory glance at the map reveals expected trends, such as the risk of skin cancer being higher in the South East, where there is more sunshine, and higher risks of lung cancer coinciding with larger cities, where airborne pollutants are more likely. However, I am sure there are other interesting observations that could be extracted if you have time to explore the data.

You can explore some of the data on the NHS Choices website and read about it on the Independent and the BBC website.

I will try to find the data and post it in ShareGeo, but until then you might want to explore this dataset showing deaths related to air pollution. I really need to get some happier datasets into ShareGeo!

Fieldtrip GB – Mapserver 6.2 Mask Layers

By Fiona Hemsley-Flint (GIS Engineer)

Whilst developing the background mapping for the Fieldtrip GB app, it became clear that there were going to have to be some cartographic compromises between urban and rural areas at larger scales. Since we were restricted to using OS Open products, we had a choice between Streetview and Vector Map District (VMD) – Streetview works nicely in urban environments, but not so well in rural areas, where VMD works best (with the addition of some nice EDINA-crafted relief mapping). This contrast can be seen in the images below.


Streetview (L) and Vector Map District (R) maps in an urban area.


Streetview (L) and Vector Map District (R) maps in a rural area.

In an off-the-cuff comment, Ben set me a challenge – “It would be good if we could have the Streetview maps in urban areas, and VMD maps in rural areas “.

I laughed.

Since these products are continuous over the whole of the country, I didn’t see how we could have two different maps showing at the same time.

Then, because I like a challenge, I thought about it some more and found that the newer versions of MapServer (from 6.2) support something called “Mask Layers” – where one layer is only displayed in places where it intersects another layer.

I realised that if I could define something that constitutes an ‘Urban’ area, then I could create a mask layer of those areas, which could be used to display the Streetview mapping only there, while all other areas displayed a different map – in this case Vector Map District (we used the beta product, although we are currently updating to the latest version).

I used the Strategi ‘Large Urban Areas’ classification as my means of defining an ‘Urban’ area – with a buffer to take into account suburbia and differences in scale between Strategi and Streetview products.

The resulting set of layers (simplified!) looks a bit like this:

masking example

Using Mask layers in MapServer 6.2 to display only certain parts of a raster image.

Although this doesn’t necessarily look very pretty at the borders between the two products, I feel that the overall result meets the challenge – in urban areas it is now possible to view street names and building details, and in rural areas contours and other topographic features are more visible. This hopefully provides flexibility for users on different types of field trip to make good use of the background mapping.

Here’s a snippet of the mapfile showing the implementation of the masking, in case you’re really keen…

#VMD_layer(s) defined before mask
LAYER
  NAME "VMD"
  # ... VMD layer definition as normal ...
END

#Streetview mask layer
LAYER
  NAME "Streetview_Mask"
  TYPE POLYGON
  STATUS OFF
  #Data comes from a shapefile (polygons of urban areas only):
  DATA "streetview_mask"
END

#Streetview layer, drawn only where the mask polygons are
LAYER
  NAME "Streetview"
  #Data is a series of tiff files, location stored in a tileindex
  TYPE RASTER
  STATUS ON
  TILEINDEX "streetview.shp"
  TILEITEM "Location"
  #*****The important bit – setting the mask for the layer*****
  MASK "Streetview_Mask"
END



Fieldtrip GB App

First of all – apologies for this blog going quiet for so long. Due to resource issues it’s been hard to keep up with documenting our activities. All the same, we have been quietly busy continuing work on geo mobile activity, and I’m pleased to announce that we have now released our Fieldtrip GB app in the Google Play Store.


We expect the iOS version to go through the Apple App Store  in a few weeks.

Over the next few weeks I’ll be posting to the blog with details of how we implemented this app and why we chose certain technologies and solutions.

Hopefully this will prove a useful resource to the community out there trying to do similar things.

A brief summary: the app uses PhoneGap and OpenLayers, so it is largely built on HTML5 web technologies but wrapped in a native framework. The unique mapping uses OS Open data, including Strategi, Vector Map District and Land-Form PANORAMA, mashed together with path and cycleway data from OpenStreetMap and Natural England.


SplashMaps – tough and usable maps

Maps are great – we use them to help us navigate around spaces. These spaces tend to be outdoors, and the weather is not always conducive to unfurling a massive sheet of paper. In my opinion, maps need to be tough. That is why it is nice to see a start-up trying to produce maps on fabric, a technique that was used widely by the armed forces during WWII.

I don’t usually give shout-outs to ventures like this, but this one is using open data and is trying to make nice, usable maps printed on fabric. I hope they raise the money they need to get going; I would certainly like to have a fabric map in my collection. If you are interested, please read the overview below and follow the link to the SplashMaps project page on Kickstarter.

SplashMaps – pic courtesy of SplashMaps

“A SplashMap is a map printed onto a fabric, and like its inspiration (the escape and evasion silk maps used in the second WW and distributed around the continent in Monopoly boxes) they are light-weight, durable, washable, wearable and ideal for the “real” outdoors of mud, wind, snow and rain… all the conditions that paper is not “cut-out” for. This is a fresh new market offering, never done before; uniquely based upon the best Ordnance Survey data and other Open Data Sources. We are able to tailor these maps to be the most usable outdoor maps ever for walking, riding, cycling, eventing or anything you could do in the real outdoors.”

OpenLayers Mobile Code Sprint

Last week EDINA had the opportunity to take part in the OpenLayers mobile code sprint in Lausanne. A group of developers from across the world gathered to add mobile support to the popular JavaScript framework.

After a week of intensive development we have been able to add a number of new features allowing OpenLayers to function on a wide range of devices, not only taking advantage of the touch events available on iPhone and some Android mobiles to allow touch navigation, but also enabling the OpenLayers map to be responsive and useful on other platforms, or even unexpected devices!

Jorge Gustavo Rocha and I worked on adding support for HTML5 offline storage, covering storing map and feature data in the user’s local browser using the Web Storage and Web SQL standards. Here is the example sandbox, which allows the user to store map tiles for the area they are viewing; these are then automatically used instead of downloading the online image when possible. More details on this and other features added can be found on the OpenLayers blog.
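The logic of that sandbox is essentially a cache-or-fetch lookup keyed by tile coordinates. Here is a minimal Python sketch of the same pattern – the real code is JavaScript against the Web Storage / Web SQL APIs, and the names here are invented, with a dict standing in for local storage.

```python
# Cache-or-fetch tile lookup: only hit the network when the tile is not
# already in local storage, then serve the stored copy thereafter.
class TileCache:
    def __init__(self, fetch_remote):
        self.store = {}               # stands in for the browser's local storage
        self.fetch_remote = fetch_remote

    def get(self, z, x, y):
        key = f"{z}/{x}/{y}"
        if key not in self.store:     # cache miss: download and keep the tile
            self.store[key] = self.fetch_remote(z, x, y)
        return self.store[key]        # cache hit: no network needed

calls = []
def fetch_remote(z, x, y):
    """Pretend network fetch; records each call so we can count them."""
    calls.append((z, x, y))
    return f"tile-bytes-{z}-{x}-{y}"

cache = TileCache(fetch_remote)
cache.get(3, 4, 2)
cache.get(3, 4, 2)    # second request is served locally
print(len(calls))     # prints 1: only one network fetch happened
```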

I have to say I wasn’t sure what to expect, and I have certainly found it rewarding contributing to OpenLayers and working with such a dedicated and talented team of developers. Far more was achieved than I would have thought possible in such a short space of time. Very inspiring stuff!