Geoforum 2014: Live Blog

GeoForum 2014 (#geoforum2014) takes place at the Edinburgh University Informatics Forum from 10am until 4.15pm today. Throughout the event we will be liveblogging so, whether you are able to join us or not, we suggest you bookmark this post and take a look later this morning for notes from Peter Gibbs's keynote. Keep an eye on the post throughout the day as it will be updated after every session. We also welcome your comments (below), whether during or after the event.

As this is a liveblog there may be occasional spelling or typing errors. We also welcome additional links, resources, or any corrections to the text if you spot them. 

Welcome – Helen Aiton, EDINA

I'm Helen Aiton, User Support Team Manager at EDINA, and it is my pleasure to welcome you to the EDINA Geoforum 2014. We hope you will have a very enjoyable and interesting day. We have lovely weather today for our citizen science excursion. We also want to thank our data providers, who are here today and who you will have an opportunity to speak to during lunch and during our breaks. With that I will hand over to Tom Armitage to introduce our keynote speaker.

Tom: Our keynote speaker is Peter Gibbs, BBC Broadcast Meteorologist. We booked a weather forecaster and they brought the lovely weather with them! We'll do that again! Peter will be talking today about flooding.

Keynote Speaker: Peter Gibbs, BBC Broadcast Meteorologist

When Tom first asked me to give this talk he mentioned that hydrology data had been added to Digimap, and I thought: what would be an appropriate topic? Something that shows several databases and data sources being brought together seemed to fit, so…

Let me take you back to 2007, summertime in England and Wales. There was severe rain, several people lost their lives, and 55,000 homes were flooded. The government came in for a bit of stick, so they set up an inquiry. This was completed about a year later. The Pitt Review recommended that the Environment Agency and the Met Office should work together. Previously there had been a bit of a dichotomy: at the Met Office we predicted the weather, and once it had happened we washed our hands of it. And it worked the other way too… it wasn't exactly joined up. So the recommendation was that this joint grouping should issue warnings together, and do so at a lower threshold, to allow for better preparedness. That might mean false alarms, but also better safety and more time to plan.

So the Flood Forecasting Centre was set up in 2009 and it’s a new set of experts basically. Meteorologists have been trained in hydrology, hydrologists have been trained in meteorology and that team works together and operates 24/7.

So, we predict the weather… but weather is a messy thing. This is difficult science. Let's look at a radar snapshot from 4th February 2009. In weather terms a block of major storms appears here. It's not very big… but it happens to be on a hill between Exeter and St Austell. And on that hill the rain fell on snow. That caused traffic chaos. But only because it was intense rainfall, in that spot, at that moment, at that temperature… that complexity is the stuff weather forecasters have to deal with all the time. And we are increasingly asked not just to predict the weather but also to predict the impact, and that can be really difficult to do.

So, how do you actually do a weather forecast? Well, you start with what's happening now. Observations are key to setting up a good weather forecast. Surface observations, taken by people on the ground, are an important part of this. Coverage around the world varies… on land in Western Europe it is pretty good, Africa less so, high altitudes less so, and over the oceans there are also big gaps. So you can see already that it's an incomplete picture.

Radar is good, provides ground truth, very useful, but it tends to only cover land areas and radar is expensive. We also have a network of upper-air measurements via weather balloons… but that network is even sparser. It means sending up a packet of electronics once or twice a day that you never get back! And we do also get observations from aircraft. Almost every commercial flight gathers wind speed, direction and temperature, and sends that back. It does cover the oceans, but still more in the Northern than the Southern Hemisphere. About 130,000 observations a day though, so very useful.

Weather satellites cover a great deal of the gaps now though; several geostationary satellites are the real workhorses for weather data here. The location/proximity to the equator varies, but we get great imagery that provides useful parameters on a gridded basis. You can get a profile all the way down the atmosphere – not as good as balloons – but still very good, especially for sparsely populated areas. And weather forecasters are increasingly data hungry…

Now all that data means there is an awful lot of number crunching to be done. The Met Office has been at the forefront of this, though there is more to do. We have a “Unified Model” allowing us to do a full forecast of the globe. That's important for forecasting. We can take that model down to smaller areas, we can nest models, and it lets us generate weather forecasts at lots of levels from that one model. Now the model data all has to be gridded. The gridding is getting finer and finer as more sophisticated computer modelling lets us reach greater and greater detail. Not long ago our grid resolution was 40km. For the UK, that's not that many grid squares. A high resolution 4km model is far more accurate and far more like the mapping you'd see in your atlas.

The model we now use for UK weather forecasting and broadcasting is finer still: 1.5 x 1.5 km squares across the UK. But you come across a problem. Because weather is global, you have to have stuff happening at the edges, to feed incoming weather into your model… weird things happen if you don't do that and it won't be that useful. So it's kind of blended… we have lower resolution at the edges, then more detail at the centre. And we are talking about a three-dimensional atmosphere: that's the horizontal grid area, and we have 38 vertical grid levels above it. You can see how hefty the computing power needed is for this sort of forecasting – bringing in that level of modelling detail along with thermodynamic calculations.
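To get a feel for why finer grids are so computationally expensive, here is a rough back-of-the-envelope sketch in Python. The domain size is an assumption for illustration (roughly UK-sized, not the actual Unified Model domain); only the 38 vertical levels come from the talk.

```python
# Rough cell-count arithmetic for a UK-sized domain (assumed ~1200 km x 1200 km;
# illustrative only, not the actual Unified Model domain).
DOMAIN_KM = 1200
VERTICAL_LEVELS = 38  # as mentioned in the talk

for res_km in (40, 4, 1.5):
    columns_2d = (DOMAIN_KM / res_km) ** 2          # horizontal grid columns
    cells_3d = columns_2d * VERTICAL_LEVELS         # total 3D grid cells
    print(f"{res_km:>4} km grid: {columns_2d:>12,.0f} columns, "
          f"{cells_3d:>14,.0f} cells over {VERTICAL_LEVELS} levels")
```

Halving the grid spacing roughly quadruples the horizontal cell count (and the time step usually has to shrink too), which is why each jump in resolution needs a new generation of supercomputer.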

So, that model generates a weather prediction… but you have to be careful in how you separate the signal from the noise – an increasing challenge as the data becomes higher and higher resolution. There is always a learning process whenever we change to a new model, and a new data resolution.

It's not just weather that goes into this model. Other things have to be brought in. Soil type data has to be brought in: sandy soil heats quickly in the day, cools quickly at night, and transmits heat, while a more clay soil will be slower to heat or cool. Soil moisture content will also make a big difference; more moist soil slows heating and provides more moisture for clouds. Vegetation type matters a lot, especially at mid-latitudes where we are, and vegetation type changes hugely over the year. That coverage hugely changes solar radiation and temperature. Sea surface temperature is very important for tropical cyclones. Sea-ice thickness matters a lot – more white ice and there will be more reflection of solar energy; thickness changes the warmth of the ocean as well as the reflectiveness. Sea surface currents also matter, for transporting warm or cool water. Some of this data is based on observations, some on average annual variation.

There is one big enemy of weather forecasters… and that's chaos. In theory, if you had a perfect representation of the atmosphere and a perfect model then your forecast could be perfect. In practice neither can be totally perfect; there is always lots of variation. So, to deal with that variation we use something called “ensemble forecasting” – making many different forecasts and effectively comparing them.

So, let's take a one-day forecast – a “spaghetti chart” – this shows changes building up… and when we do that ensemble forecasting, and compare the runs on a “postage stamp” print-out, we can start to see the variance across the forecasts. We can talk in terms of probability… so here we could see northwesterly winds, chilly… If the runs all look totally different, then the weather is really changeable and unpredictable, and that's useful to know. If they are all very similar, we can have a lot of confidence in the prediction. That confidence rating on the forecast is a really useful thing. And if we look at a “whisker diagram” we can again see this idea of comparison and confidence in the predictions.
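As a toy illustration of the ensemble idea (nothing like the Met Office system itself), the sketch below runs a small ensemble of "forecasts" from slightly perturbed starting conditions through the chaotic Lorenz-63 equations, and uses the spread of the members as a crude confidence measure: a tight spread suggests high confidence, a wide spread a changeable, hard-to-predict situation.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz-63 system (a stand-in for a weather model)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(42)
n_members, n_steps = 20, 1500

# Each ensemble member starts from the same analysis plus a tiny perturbation,
# mimicking uncertainty in the observed initial state of the atmosphere.
members = np.array([np.array([1.0, 1.0, 1.0]) + 1e-3 * rng.standard_normal(3)
                    for _ in range(n_members)])

for step in range(n_steps):
    members = np.array([lorenz_step(m) for m in members])
    if step % 500 == 0:
        spread = members.std(axis=0).mean()   # crude "whisker" width
        confidence = "high" if spread < 1.0 else "low"
        print(f"step {step:4d}: ensemble spread {spread:6.3f} -> {confidence} confidence")
```

Early on the members stay together and the spread is tiny; further out they diverge, which is exactly the behaviour a spaghetti chart makes visible.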

So how does that forecasting translate to the work at the Flood Forecasting Centre? Well, we take that forecast and put it into a hydrology model with grid-to-grid modelling. There are four types of flooding: fluvial flooding – rivers overflowing; pluvial flooding – rainfall so rapid that drainage can't cope, water in dips in roads, “flash flooding”; coastal flooding; and groundwater flooding – which tends to be a delayed reaction, and we saw a lot of that in the winter flooding we've just had, where water moves through the system and causes belated effects such as underground flooding.

So, for instance, Cumbria 2009. We take the weather model. Data comes into the grid-to-grid model… each grid point takes the water falling out of the sky. The model has relief, river flow, soil moisture, etc. data. And so it tracks where falling water will land, where it will go next, and so on… water starts to fall and flow into river systems, heading off down and out to sea… this is shown in cumecs (cubic metres per second). It's reasonably high resolution data. That's then converted into return period flows (QT grids): “above 2 yr flood”, “above 100 yr flood” and so on. And from that we then get a flood guidance statement. But that's where human intervention becomes so important: interpreting that guidance statement, and understanding impact and infrastructure.
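The operational grid-to-grid model is far more sophisticated, but a minimal sketch of the underlying idea, rain landing on each grid cell and being routed downhill towards the lowest neighbour, might look like the following. The elevations, cell size and rainfall rate are made-up values for illustration.

```python
import numpy as np

# Toy 5x5 elevation grid in metres (assumed values) and a uniform rainfall rate.
elevation = np.array([[50, 48, 45, 44, 43],
                      [49, 46, 42, 40, 41],
                      [47, 44, 38, 35, 36],
                      [46, 42, 34, 30, 31],
                      [45, 41, 33, 25, 20]], dtype=float)
rain_mm_per_hr = 10.0
cell_area_m2 = 1000.0 * 1000.0      # pretend each cell is 1 km x 1 km

# Local runoff generated in each cell, in cumecs (cubic metres per second).
runoff = np.full(elevation.shape, rain_mm_per_hr / 1000.0 * cell_area_m2 / 3600.0)

def lowest_neighbour(r, c):
    """Return the neighbouring cell with the lowest elevation (D8-style)."""
    best = None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) != (0, 0) and 0 <= rr < elevation.shape[0] and 0 <= cc < elevation.shape[1]:
                if best is None or elevation[rr, cc] < elevation[best]:
                    best = (rr, cc)
    return best

# Route each cell's runoff downslope, visiting cells from highest to lowest so
# upstream contributions are passed on before a cell is processed.
accumulated = runoff.copy()
order = sorted(np.ndindex(elevation.shape), key=lambda rc: -elevation[rc])
for r, c in order:
    target = lowest_neighbour(r, c)
    if target and elevation[target] < elevation[r, c]:
        accumulated[target] += accumulated[r, c]

print("Flow accumulation (cumecs), outlet at bottom-right:")
print(np.round(accumulated, 2))
```

The real model also accounts for soil moisture, infiltration and channel routing, and its accumulated flows are what get compared against return-period thresholds to make the QT grids.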

So, take assessing the flood risk for Lowestoft as an example. This uses ensemble forecasting again. At five days out we see some uncertainty. At four days the forecasts are still varied. And then two days ahead all the ensemble forecasts come close together – something big is on the way. The big uncertainty there was how the flood defences would hold up. Defences had been built after the last major flood in 1953 but never tested. They did hold. Everyone wondered what all the fuss was about!

Just to finish off… what the Flood Forecasting Centre is moving towards is a much more focussed assessment of output. When my son graduated in Durham there was huge rainfall and flooding… a Victorian wall fell in that rain. The ultimate aim is to be able to run predictions that go to that sort of level – this road, this area, will be at risk. In shorter-term forecasts it might be about getting an hour or so of warning with the help of radar. But we are moving in that direction.

Q&A

Q: You talked about grid resolution; 1km accuracy still seems a way off. What holds that back? Number crunching? The data available?

A: Largely number crunching. The Met Office has just had another supercomputer upgrade, but it becomes harder to justify the substantial expense without a clear impact from the data. There is also the question of how accurate the forecast actually gets at that level of detail. So we are at a sort of holding point. Some say higher and higher resolution is better, some say more ensemble prediction is better.

Q: When you put up the grid for the UK, with the coarser squares on the outside, why not take more data from the Atlantic, as so much of our weather comes from there? And has there been any thought about variable grids, given such variable impacts?

A: Well, we do have edges of edges – at the edges of the fine grid sits the global pattern. But you can only run so much computing power across that grid. It's all about competing factors; you have to have the best possible solution. On the other question: part of the ensemble effort is working out what the crucial areas in a forecast are, the ones that impact the weather downstream, and you can focus on those to get better quality forecasts for the UK. People are looking at this.

Q: I'm impressed by the increased accuracy of the weather forecasts. Is it about data accuracy, or better models?

A: All of the factors I talked about: observations; modelling; data; and also the research that goes into our forecast models. And it is so much better than it was. I’ve been weather forecasting for 30 years. When I started 24 hours ahead was good, now 5 days ahead is pretty standard.

Q: How important is climate change in those flooding models?

A: Angharad is probably the person to answer that – we can talk more later – but obviously the theory is that a warmer atmosphere means more moisture in the atmosphere, more rainfall and more flooding.

And now, after a lovely break featuring caramel wafers, we move on to our next speaker…

Using EDINA Datasets in a hydrology project: Darius Bazazi, Hydrological Assessment of Natural Flood Management

I work as a spatial data analyst at GeoPlace in London. GeoPlace is owned by the Ordnance Survey and we provide the definitive national address and street gazetteers (datasets) – essentially we do what EDINA does for the Higher Education sector, but for the public sector. So AddressBase features point data for every address in Great Britain – with full postal address and a unique property reference number – allowing tracking across the lifecycle of a property.

So, a quick example of how address data is used in flood management… we have zones that show high, less high and low risk. These allow emergency services, for instance, to target resources in flood events. And the property data lets us estimate more specific risk. Why does this matter? Well, when you don't have good data… Thames Water was fined £86 million for overestimating flood risk, for instance.

So, today I'll be talking about hydrology and mainly about maps. I know we have a room full of experts here so I hope you'll pick me up on anything I get wrong! I used three EDINA datasets in the project I'm talking about – Land Cover Map 2007, the Digital Terrain Model (DTM), and some OS(?) mapping. I really enjoyed using EDINA Digimap and the GUI there.

The project was looking at soil moisture, aiming to calculate a regression-based equation for PROPWET (soil moisture). Currently engineers use a constant value for catchment PROPWET (from the Flood Estimation Handbook), so we wanted to test and develop a regression-based model. GIS is a good tool for that because we can bring in so many variables.
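A regression of that kind can be sketched very simply: given PROPWET values for a set of gauged catchments and a few catchment descriptors, ordinary least squares fits the coefficients. The descriptors and numbers below are hypothetical placeholders, not those used in the project.

```python
import numpy as np

# Hypothetical training data: one row per gauged catchment.
# Columns: annual rainfall (mm), mean elevation (m), woodland fraction (0-1).
descriptors = np.array([[1800.0, 450.0, 0.30],
                        [1200.0, 250.0, 0.10],
                        [ 950.0, 120.0, 0.05],
                        [1500.0, 380.0, 0.22],
                        [1100.0, 200.0, 0.15]])
propwet = np.array([0.78, 0.55, 0.42, 0.70, 0.52])   # observed proportion of time soil is wet

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(len(propwet)), descriptors])
coeffs, *_ = np.linalg.lstsq(X, propwet, rcond=None)
print("Intercept and coefficients:", np.round(coeffs, 5))

# Predict PROPWET for a new, ungauged catchment (hypothetical descriptors).
new_catchment = np.array([1.0, 1300.0, 300.0, 0.18])
print("Predicted PROPWET:", round(float(new_catchment @ coeffs), 3))
```

The GIS work described below is essentially about deriving good values for those descriptor columns catchment by catchment.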

Natural Flood Management (NFM) involves using nature to mitigate flooding – it is currently difficult to analyse, but this equation sort of feeds into that, looking at how we can vary different ground conditions in the modelling. For instance, NFM is used at Eddleston Water in the Scottish Borders, a 69 square km catchment that is part of the River Tweed catchment. So, PROPWET measures the proportion of time that soil is wet. The value is currently taken from the Flood Estimation Handbook, but that Handbook is quite old and gives quite static measures of moisture.

Looking at the river network in Scotland, and the river network monitoring stations, we can see how data is gathered and what its coverage is. Using a Halcrow analysis we can see the variation of data across Scotland. It's pretty good, but we wanted to broaden that analysis across Scotland. So, the Land Cover Map 2007 was really interesting to work with. I obtained the vector and 1km raster versions. My desktop GIS couldn't handle the scale of the vector mapping, so I had to use the raster mapping (which is based on the vector). This covered Scotland across river catchments. LCM2007 is a parcel-based classification of 23 land cover types.

So I took that raster data and generated some catchments. You can get them from the Centre for Ecology & Hydrology, but we thought it might be interesting to generate our own. For this we took the Digital Terrain Model and made some transformations to produce a catchment delineation. I wanted to get areas to within 10% accuracy, but only got to 17%, so I decided to use the Centre for Ecology & Hydrology shapefiles instead. Then I did some geoprocessing, using ArcGIS ModelBuilder and Python to iterate zonal stats maps, and looked at potential evapotranspiration models, Leaf Area Index (vegetation cover) and surface resistance.
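The zonal statistics step itself is conceptually simple; a minimal NumPy sketch (standing in for the ArcGIS ModelBuilder/Python workflow, with made-up raster values) averages a gridded variable within each catchment zone.

```python
import numpy as np

# A small raster of some gridded variable (values assumed, e.g. a land-cover-derived
# index) and a matching raster of catchment IDs (the "zones").
values = np.array([[0.2, 0.3, 0.5, 0.6],
                   [0.1, 0.4, 0.5, 0.7],
                   [0.2, 0.2, 0.6, 0.8],
                   [0.3, 0.3, 0.4, 0.9]])
zones  = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [3, 3, 2, 2],
                   [3, 3, 3, 2]])

# Mean of the variable within each catchment, i.e. a zonal mean.
for zone_id in np.unique(zones):
    zonal_mean = values[zones == zone_id].mean()
    print(f"catchment {zone_id}: mean value {zonal_mean:.3f}")
```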

The result was an equation appropriate to Scotland but, had this been more than a university MSc project, it would be nice to expand the model and cover more of the UK.

The OS have a new dedicated water product, a new Water Layer (beta), which has been created with the Environment Agency and SEPA, in line with INSPIRE. I think that will have huge potential.

Q&A

Q: How did you choose the size and location of the catchments? How do they choose their catchments?

A: I used the Centre for Ecology & Hydrology catchment areas in the end. I'm not sure how they select their catchments – an interesting question though.

Digimap Support: Carol Blackwood, EDINA

I will be talking both about what we have happening at the moment and what developments are coming up.

Firstly, we have a new Digimap Registration System coming soon. Currently users come in and need to register before accessing the data. We ask what you want to use and who you are, we give you a lot of legal terms, we ask for lots of information and which collections you want access to, you see a summary and submit it… then you have to wait. Usually about 24 hours, but up to 48. Currently we have to eyeball every single registration and manually approve it, we run a script overnight, and then you get access.

So, for services these days, that's rather below expectations… we are all used to giving information up but then getting access within a few minutes. And we know some students don't remember to register until the last minute. Their expectations are high, and for fast registration. Our new licence has enabled a few changes… so we have been working on a new registration system.

As we began to redesign the process we have been using “user stories” – pathways that users will take as they encounter our system. We had some requirements too. We took the user stories and our requirements and worked out our priorities… what MUST we do, what SHOULD we do, what COULD we do…?

So, the new system will be a lot simpler. A new user will come in and log in via Shibboleth credentials. You will register with your user details. You will get an email to validate your email address. And once that is validated you will be able to activate access to collections – choosing collections, agreeing to terms, and defining the purpose you want the data for. And then you will have access. This process should take around 10 minutes, not 24 or 48 hours. That should much better meet user expectations, including those who have registered a wee bit late for their deadlines!
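For anyone who likes to see a workflow written down, here is a tiny sketch of the stages as described; it is purely illustrative and not EDINA's implementation.

```python
from enum import Enum, auto

class RegState(Enum):
    """Stages of the new Digimap registration flow, as described in the talk."""
    LOGGED_IN = auto()          # institutional (Shibboleth) login
    DETAILS_SUBMITTED = auto()  # user registers their details
    EMAIL_VALIDATED = auto()    # link in the validation email clicked
    COLLECTION_ACTIVE = auto()  # terms agreed, purpose given, collection activated

def next_state(state: RegState) -> RegState:
    """Advance one step through the registration flow (sketch only)."""
    order = list(RegState)
    return order[min(order.index(state) + 1, len(order) - 1)]

state = RegState.LOGGED_IN
while state is not RegState.COLLECTION_ACTIVE:
    state = next_state(state)
    print("->", state.name)
```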

When we roll out the system there won't be any change for existing users – no need to register again. We will renew activations after 12 months – a new change, but you won't need to repeat all the initial data entry. It will be very simple, with no interruptions. And we are adding a new registration step for Historic Collection users – that's so we understand who is using that service and can support them better. Again, it will be quick and painless.

Clearly this is a big change for us. We are anticipating it around early September 2014. To ensure a smooth transition we will provide help pages, how-to guides, videos, webinars, and anything else that would be helpful – let us know what might be useful. We can provide text for handbooks, for instance. Just let us know. But this will be lovely: it will be simple, easy to use, and much quicker. And it benefits us too, as at peak times our help desk currently has to focus heavily on registration; they will be freed up at those peak times for other support activity.

The other part of my talk is about some recent updates. The first of these is our new Live Chat function – you can find this from our Help Centre, and you'll be familiar with this sort of functionality from other websites: it lets you ask live help questions via text chat. So come and chat with us, and let your users know they can ask us questions this way.

We have also been creating a whole new series of online videos – for specific tasks and tools – as well as a number of new service demos. Both of these sit within our Help Centre. The demos explain the interface and how to get started, and can be really useful for promoting Digimap or supporting users in getting started with it.

We have also been creating information for you to take and use on your own local pages: descriptions of all the collections, logos for Digimap, sample images, and textual information. This is the definitive source of up-to-date information on the services – you can always come here to get text and resources for your own use in webpages, documents, etc. So please do have a look and make use of it.

And we still have our traditional in-person training courses – in Derby, Oban, Bradford, Swansea, Southampton, London, Newcastle. If you’d like to host one just let us know and we’ll be happy to work with you, to let you know our requirements etc. We have also been out doing Digimap workshops tailored for history students; Digimap and GIS workshop for earth and environmental science MSc students, etc. If there is something tailored that you would like, just let us know and we will be happy to help.

We have also been out and about running workshops: British Cartographic Society Map Curators workshop; Enhancing Field Learning; Research Data Management for geo data, etc. And our colleague Tony Mathys has been running metadata workshops and giving a paper at GEOMED 2013. If you are interested in metadata, Tony is your man, just get in touch.

We are also running an increasing number of webinars, run by our colleague Viv Carr. These are very flexible sessions, usually 1-2 hours, and often easier to fit into a working day. We have a series of these on Digimap Collections, Roam, Historic Digimap, etc. And we also take requests, so just get in touch!

And, if you want to find out where we will be do take a look at the new Events page on our blog, keep an eye on the EDINA Events page – where training sessions and webinars are advertised – and keep an eye on GoGeo for more on Tony’s Metadata workshops.

And… a final note on what's coming shortly… our engineers are currently working on Site Rep Stats for all collections, as well as the next run of the annual calculation of the value of data used by your institution – and there will be more information about that.

So that was a very quick round up of what’s happening. Basically, if there is anything we can do to help just get in touch!

Q&A

Comment: Just to reiterate something you said. The tailored session on GIS for history students that you did for my masters students was just brilliant. I would recommend that. It was great for my students to see their subject from that different geospatial perspective, really useful for their understanding. And there are a lot of really good resources there for those wanting to carry on that learning.

And now time for lunch including Service demonstrations; Project demonstrations; Meet the map providers; Share your Vox Pops on Digimap.

 

During the next sessions the attendees will split into two groups. One (those with green dots on their badges) will start with the Citizen Science in your Pocket excursion; the other will start with the Digimap Update session. This blog will, however, be sticking with the Digimap Update session throughout the afternoon.

 

Digimap Update – Guy McGarva

I will be covering what we’ve done in the last year, what we’ve got coming up and what we will be doing in the future.

We have been listening to your feedback, through surveys, support requests, discussions – including chats at training events, events like today, etc. and these feed into our Quality Improvement Plan.

Since the last GeoForum we have launched a new collection, Environment Digimap. This is data from CEH and includes Land Cover data for 1990, 2000 and 2007. We have made improvements to all of the Roam mapping clients within Digimap. Roughly a year ago we highlighted that these were coming; they have now been implemented. There is a new interface for all Roams, so they are now cleaner and more consistent. They are all essentially the same, with minor changes for different collections. This makes updating and maintenance easier, but also makes them more consistent for use and exploration. Whether coincidence or not, we have had much higher usage over the last year. We had well over 1 million screen maps generated in the last year – we must be doing something right! We have also seen general usage on the rise, and downloads too.

And we have made significant changes to the print interface in all Roams, based on feedback we've had from you and from users. Part of that is the layout preview – seeing the area you are printing – no more wasted time creating PDFs of the wrong area. We've seen PDF generation go down despite usage going up; we think that's because users are getting the right map first time much more often. We now have variable scale printing in all Roams except Historic: you can choose the exact scale you want for your map and can print at any size from A4 to A0, whether PDF, JPEG or PNG. And you can now print overlapping maps in Ancient Roam. Some fairly significant improvements there.

We have also improved search. It doesn't mean much visible change, but we now include street names and partial postcodes in searches – providing more options for exploring the data.
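Partial-postcode matching of the kind described can be sketched with simple normalisation and a prefix test; this is purely illustrative and has nothing to do with Digimap's actual search code.

```python
import re

def normalise_postcode(text: str) -> str:
    """Uppercase and strip spaces so 'eh8 9' and 'EH8 9' compare equal."""
    return re.sub(r"\s+", "", text.upper())

def matches_partial_postcode(full_postcode: str, query: str) -> bool:
    """True if the query is a leading fragment of the full postcode."""
    return normalise_postcode(full_postcode).startswith(normalise_postcode(query))

# Example: a partial postcode finds every address in that sector.
gazetteer = ["EH8 9LE", "EH8 9AB", "EH1 1YZ", "G12 8QQ"]
print([pc for pc in gazetteer if matches_partial_postcode(pc, "eh8 9")])
```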

And finally… we now have printable legends. You can create printable PDF legends in all Roams except Historic. And Geology legends are now dynamic – i.e. they only show features that appear on the map. We might, in future, extend that concept of dynamic legends to other areas of Digimap. The printable legends come as PDFs, you can add them to your map, etc. So that's what we've added in the Roam client.

Over the year we have been trying to migrate all of Digimap to the new Data Download clients, based on the OS Download client. We started by launching a new Geology Download – this went live in August 2013. And we have added new data: Northern Ireland Land Cover Data from CEH and more geology data from BGS, including flooding and permeability. And whenever we make updates to data or to interfaces we post these on our blog – any big change will appear there.

One of the most significant changes has been around formats. For a long time OS MasterMap data was only available in GML format… that's how we get it from Ordnance Survey, but it's not easy to use in GIS programmes – it always required some sort of processing or conversion. For some users that was almost impossible; we had a lot of questions from CAD users about this, and generally this format issue was particularly difficult on a Mac. So, we now have a way to convert the data to DWG on the fly. This has already been implemented and makes life a lot easier, particularly for CAD users, as most CAD systems will support DWG files. Feedback on this has been really good; it lets people use data right away rather than having to make their own conversions.
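For users converting MasterMap GML themselves, one possible route is GDAL's ogr2ogr tool driven from Python; note it writes DXF rather than DWG (open-source OGR cannot write DWG, and the Digimap service uses its own commercial conversion pipeline). The file names here are placeholders, and complex MasterMap GML may need extra configuration to read cleanly.

```python
import subprocess

# Convert an OS MasterMap GML chunk to DXF using GDAL's ogr2ogr command-line tool.
# (DXF, not DWG: writing DWG needs commercial tooling, which is what the Digimap
#  service itself relies on for its on-the-fly DWG conversion.)
subprocess.run(
    ["ogr2ogr", "-f", "DXF", "mastermap_tile.dxf", "mastermap_tile.gml"],
    check=True,
)
```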

That's what has happened. We are now working on providing MasterMap as File Geodatabase – that will make it easier to load into ArcGIS and QGIS. We are using FME for the on-the-fly conversion. We won't be doing shapefiles, as the shapefiles generated from this data are too massive to process, hence File Geodatabase. That will be coming soon. Also, Ordnance Survey have data on building heights (alpha version). This is a very new thing, an alpha product and a first release… but we want to make it available to users. We will be taking that MasterMap building heights data and providing it, initially at least, as KMLs – and you can see a demo of that in our demo area today. That's all buildings across all of the UK, and that data should be really interesting, particularly for architects. Eventually that data will just become part of MasterMap.
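As a rough illustration of what extruding heights into KML means (not EDINA's conversion code; the footprint coordinates and 12 m height are made up), a building polygon can be written with the KML extrude flag set and its height used as the altitude.

```python
# Write one extruded building as KML using only the standard library.
# Footprint coordinates (lon, lat) and the 12 m height are made-up values.
footprint = [(-3.1870, 55.9445), (-3.1865, 55.9445),
             (-3.1865, 55.9448), (-3.1870, 55.9448), (-3.1870, 55.9445)]
height_m = 12.0

coords = " ".join(f"{lon},{lat},{height_m}" for lon, lat in footprint)
kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Building (height {height_m} m)</name>
    <Polygon>
      <extrude>1</extrude>
      <altitudeMode>relativeToGround</altitudeMode>
      <outerBoundaryIs><LinearRing><coordinates>{coords}</coordinates></LinearRing></outerBoundaryIs>
    </Polygon>
  </Placemark>
</kml>"""

with open("building.kml", "w", encoding="utf-8") as f:
    f.write(kml)
```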

And we keep adding new basemaps in Roam. These are variants at a particular scale, or a different data set, giving you the option of different basemaps at different scales. So VectorMap District raster and vector versions will be available as alternatives to 50K. And we will be replacing OS 10K Raster with VML Raster – that's because OS are withdrawing 10K Raster later this year. And we will hopefully also be adding three versions of Miniscale too.

We have new improvements coming to Marine, a fairly major update in fact. We will be getting new high-resolution bathymetry data for Marine – now 30m resolution rather than 180m, so that's a really massive difference. And we will be getting updates to the Hydrospatial and Raster Chart data. And there will be a new Data Download client – in the same style as the other Digimap collections.

We have had a project with the National Library of Scotland, who digitised new metadata for all the historic maps we hold. That is resulting in improved metadata for Historic data in England and Wales, including improved dates for publication and survey (Scotland already has this). And, again, we will have the new Historic Download client.

Still to come…

We are working on some updates to the Data Download basket – to make it easier and clearer to select different formats and dates, which isn't hugely easy to do right now. When we get a new data set we don't just delete the old data… you can go back and select last year's data or data from two years ago – you can get earlier versions of the data as well.

We have some small changes to make to the Digimap home page. Because we have rationalised our clients we don't need so many expanding menus, so we can make it much more tablet friendly. And, as mentioned earlier, we will also have Site Rep Stats for all collections – so as a site rep you will be able to see data on how Digimap is being used in and by your institution. We would love your feedback on any of these aspects, and we will continue to inform you of changes like this through the Digimap blog.

And one thing we haven't really mentioned yet…

We have a new service coming later this year: Digimap for Colleges. This will be based on the Digimap for Schools functionality – which means simpler, somewhat stripped-down functionality and a Roam-style mapping client. This bridges the gap between school and university, and is particularly useful for those doing A levels, for instance. It means FE institutions will be able to use Digimap without needing Shibboleth support – and FE institutions will be able to choose between Digimap for Colleges and Digimap for HE. So that will be coming pretty soon.

We are also working on other geospatial services. We are working with the UK Data Service Census Support – which will become “open” shortly, meaning no login will be required to download most of the data. Some data sets will still require login, but most will become open. That's all Census data, Scottish Census data, etc.

We have also been making changes and enhancements to other geo services. Unlock has been expanded to include historic English place-names, data generated through the DEEP project, and improvements have been made to the API to allow you to search for these old place names. And GoGeo will gain language conversion support, support for the Gemini 2.2 metadata standard, updated GeoDoc resources, and a mobile version of the site.

In addition to our services we are also working on a lot of different geo projects at the moment. COBWEB is a major EU-funded project with partners across Europe and involves citizen science in Biosphere Reserves. We have the Palimpsest and Trading Consequences text mining related projects, and the Spatial Memories project. A lot of these are based around, or include development of, mobile apps.

In terms of our longer term aims, we have been doing a lot of work to integrate geo services together more – including Unlock, FieldTrip GB, etc. We have been making more use of available data feeds – many suppliers provide WMS now and we are trying to think about how we can use that more, and most effectively. We want to make user workflows simpler and more complete, to provide solutions to common problems to do with the data life cycle, to improve the utility and value of the services, and to improve our support resources and capabilities.

So, that’s where we’ve been, where we are going. What do you want to see next? We would welcome your comments, ideas, any wish list items about data, functionality or support that you or your users may want. Let us know now or just get in touch after today…

Q&A

Q: Are there any enhancements planned for the usability of Roam – zoom, pan, selection boxes? Are you moving to better internet-based software for that, or will you be sticking with what you have now?

A – Tim: We have a project to look at this. We are planning to make it tablet friendly as that is clearly needed.

A – Guy: It's the data too. We use the OpenLayers 2 toolkit; OpenLayers 3 is coming soon, so we can see what becomes possible then.

Q: You said you can select dates for data download. One significant issue in Digimap is the lack of earlier maps?

A – Tim: We have LandLine from 1995, and MasterMap from 2006/7. We hope to make that and earlier data available. We've been doing a lot of work to make that happen, but it is more of a licensing and usability challenge than a technical one.

Q: Will KMLs of buildings be textured?

A – Guy: No. Right now we have the OS building heights as a CSV of points, and we will be extruding that data to create the buildings KML, and maybe other formats. Once building height information is out of alpha, OS are saying that it will be part of MasterMap as an attribute. In the meantime we'll supply that data as the CSV but also as KMLs.

Q: Anything else happening in OpenStream?

A – Guy: Only to update the data. But we also want to look at what levels of WMS we can provide. We are certainly thinking about how that sort of data and technology can be used more widely.

Tom: Do give us feedback on anything you’d like to see added – an email at any time is fine.

Q: Are there plans for other Ordnance Survey data sets to be added?

A – Guy: We are still working with OS on the new licence. We are still waiting to see what OS can make available, and what we can get it for. If there are particular data sets you would like to see added, do let us know and we can look into it. And if it's not available yet, you can approach OS; letting OS know you want some data will help encourage them to think about adding it to the data available in Digimap.

Q: I have been looking at OpenStreetMap – any plans to provide, say, Shapefile downloads from OpenStreetMap?

A – Guy: We are always looking at that type of thing. If there is demand that could be really interesting, particularly for non UK mapping.

Q: For some of the marine data, coverage across the North Sea would be really helpful…

A – Guy: the new data we have is better in terms of extent, especially the charts.

Q: With the enhancements being made to Roam, when you can print to A0, what’s happening with Carto? It used to be that Carto and Classic were very different. You used to be able to customise the maps in Carto.

A – Guy: You can do that in Roam.

A – Tom: But you can't merge products as easily… though many of the vector map products integrate more of the products you used to have to combine. And for things like contours we are adding those as an overlay to make them easier to print.

A – Guy: We are trying to ensure functionality of Carto is in Roam, but easier to use. And Roam has a lot of additional functionality, particularly around Legends, that neither product would have had before.

And now we will have a break. Look out for tweets from the FieldTrip GB Excursion (#geoforum2014) and join us on the blog for the second round of Q&A from the next Digimap Update session, later this afternoon.

Q&A – Second Session

Q: Have you considered European mapping?

A – Tom: It’s something that we are being asked for more and more, particularly as we have more non geographers using Digimap. Let us know if you have people come and ask about this. Other countries have good mapping agencies – Denmark for instance.

A – Guy: We are certainly getting queries for data in Germany, France, etc. And it might not be about us providing the data but being able to let the user know where the data is, how to access it, if it’s available for free.

Comment: There is a European data set that could help.

A – Guy: This can be tricky, as often it's multiple data sets with different licences. But if you know of suitable data, do tell us. When we got the marine data here we found out about other marine resources and we've made links to those available – we are happy to consider adding data, but we can always link to data held elsewhere too. I didn't mention ShareGeo, but that allows us to share ad hoc data sets… sometimes we create data sets and share them there, as we know they are tricky to obtain from other places.

Comment: I went to OS for data for my MSc – wanted data at about five levels and the 0.5 m level.

A – Guy: If you do find data sets we don’t have then do let us know – and the more people who go to the data supplier requesting a data set, the more easily we can show demand there from the sector.

Comment: I looked through ShareGeo for some data – and found some really useful stuff there. I’d recommend looking through that but also sharing your own data there.

A – Guy: We do create a lot of the data sets there, in response to queries, when we find useful data, etc. But we would love to get ShareGeo used more, to get it out into the mainstream. It would be great to generate a WMS feed from the data there too – shapefiles are useful, but…

Q: Is there any chance of renegotiating for the imagery?

A – Guy: No prospect at the moment. We may try again. It is OS imagery. But there are other sources that can be looked at. And the SPOT imagery that will become available later this year should be openly available.

Comment – Angharad Stone, CEH: We'd like to get the environmental LiDAR and imagery data made available – not full coverage, but good quality. In the meantime you can use it free for academic use, but you'd need to come to us (CEH) directly.

Closing Remarks – Tom Armitage

Firstly, we would like to thank all of our speakers and all of the suppliers for providing great demos. Thank you for some fantastic questions and for all of your continued support for Digimap. There will probably be a Geoforum next year, possibly further south, so keep an eye on the blog for more information as always. And all of today's presentations will be available via the Digimap blog, and possibly elsewhere on the EDINA website too.

And, with that, we close the event. Thanks for following along on the blog!
