eLearning@Ed Conference 2012 LiveBlog

Today I am at the eLearning@Ed Conference 2012. This is an annual event focusing on experiences, innovation, and issues around elearning and based at the University of Edinburgh. As usual this is a live blog and will likely contain typos and occasional errors – do leave a comment if you have a correction!

Please note: the LTS team are livesketching the day with an iPad today as well: http://tweelearning.tumblr.com/

::: Updated – you can now view all presentations here :::

The schedule for today will be updated and transformed into the headings below as the day progresses.

Welcome – Professor Dai Hounsell, Vice Principal Academic Enhancement

It’s lovely to be here this morning and to be reminded of how wonderful a place to work this is with such a creative and innovative community. And this is such a wonderful Edinburgh title “Pushing the Boundaries, Within Limits”. Indeed you may recall a campaign for Glasgow called “Glasgow’s Miles Better” and someone created a mini local Edinburgh response: “Glasgow May be Miles Better but Edinburgh is Ever So Slightly Superior”. But that note of caution is sensible. There has been so much talk about how elearning is going mainstream that we can lose sight of how…

We are pushing boundaries but then what sits within those boundaries is really changing. The University in 2012 would be unrecognisable to someone stepping out of a time warp from 1992 say. I think many of our practices and notions of what makes good teaching can be the consequence of old ways of doing things. That’s part of the challenge of breaking boundaries. A lot of our boundaries are part of the past. If we had started with word processing rather than pencil and paper would feedback have become a thing we do after the fact? And if we think about collaborative learning it really challenges some of our colleagues in terms of what they think is right or fair, some funny words can come back in response like “collusion”. As an aside a colleague speaking in Scandinavia found there is no word in Swedish or Danish or Norwegian for “collusion”, it’s all just “collaboration”.

When our colleagues get nervous about the possible downsides of students collaborating, we have to recognise that they won’t change overnight, but we also have to realise that it’s valid and right to push them. And on that note I shall hand over to Wilma.

Wilma Alexander, chair of the eLearning Professionals and Practitioners Forum, is welcoming us and telling us that eLPP is changing its name officially today to eLearningEd. This is intentionally less obscure and should help to clarify what the group is about, and particularly help colleagues in the University understand what we do.

So to introduce our first speaker: Grainne has been invited along today because much of her current and past research looks at the kinds of issues Dai has talked about in his introduction.

Keynote – Openness in a Digital Landscape. Professor Grainne Conole, University of Leicester. Abstract

I’m going to talk a little bit about the notion of openness which I’ve been working on at the Open University and more recently at the University of Leicester, where I’ve been since September. I’ll be talking about technology trends. I’ll talk about learner experience. And I’ll talk about open practices – Wilma pointed out the hashtag for today (#elearninged) and how many of you tweet [it’s most of the room], that sort of thing is really changing what we do. Then I’ll be a little more negative and talk about teacher practices and paradoxes. I’ll talk a little about new approaches to design. And then I’m going to talk about metaphors and the need for new ways and types of descriptions.

Technological Trends (http://learn231.wordpress.com/2011/10/25/trend-report-1). In the 2012 Horizon report we’ve seen mobiles and e-books highlighted. In Leicester the Criminology masters programme have just given all of their students iPads as part of the package. We have game-based learning and learning analytics – the latter is a sexy new term for the types of analytics we can gather on how people learn and use our materials, resources and tools. Gesture-based learning and the Internet of Things – there was a lovely article in the Guardian. See also: personalised learning, cloud computing, ubiquitous learning, BYOD (Bring Your Own Device), digital content, and flipped dynamics between student and teacher.

If you Google or look on YouTube for Social Media Revolution and The Machine is Us/ing Us, both really give a good sense of how things are changing. And you might also want to look at a report we did for the HEA where we looked at some key features of Web 2.0: peer critiquing; user generated content; networked – this is the power of tweeting; open; collective aggregation; personalised. The report is: http://www.heacademy.ac.uk/assests/EvidenceNet/Conole_Alevizou+2012.pdf. If we had time…

Gutenberg to Zuckerberg – John Naughton (blogs at http://memex.naughtons.org/) – it’s a great book. And he says: take the long view – we could never have predicted the impact of the internet even in 1990; the web is not the net; disruption is a feature; ecologies not economies; complexity is the new reality; the network is now the computer; the web is evolving…

Sharpe, Beetham and De Freitas (2010) found that learners are immersed in technology; their learning approaches are task-orientated, experiential, just in time, cumulative, social; and they have very personalised and very different digital learning environments. I have two daughters: one is very organised and very academic in her use of technology but she thinks Facebook is the work of the devil; the other is dyslexic, is quite the opposite, and loves Facebook. Who loves Facebook? Why? Who hates Facebook? Why? Our students will also be conflicted and have different views. And our students will be using both institutional technologies and outside tools.

Open. Open resources span a huge range – there has been huge funding for OER spaces like MERLOT, MIT OpenCourseWare, OU Learning Spaces etc. Increasingly, research here shows that making OER available isn’t enough. In a recent report (http://www.oer-quality.org/) and on the OPAL site we looked at what sort of support people need to use OER effectively – I really recommend that report’s recommendations and the OPAL site if you are interested in OER.

Open Courses. These Massive Open Online Courses (MOOCs) get huge numbers of participants but have a high drop-out rate. It’s really interesting to have open educational materials and open courses (http://mooc.ca/). There is also the Open Access University in New Zealand.

Martin Weller, author of The Digital Scholar and blogger, talks about open scholarship and exploiting the digital network, new forms of dissemination and communication. I use Twitter on a daily basis and am connected to about 4000 people there; the speed of disseminating information through Twitter is unprecedented and very core to my practice.

Thinking about Open Research I wanted to talk about some of the spaces I use. My blog, e4innovation, is core to what I do. Repositories have become a core part of what we all do – we have the REF coming up and those repositories are being scrutinised in more detail. And there is use of things like wikis and semantic wikis, bookmarking like Diigo, Slideshare, Dropbox, Academia.edu etc. Although I tend to use Twitter and Facebook mainly – I’m on Google+, Academia.edu etc. but don’t tend to use them.

Really interestingly Google now has a Citation tool within Scholar and you can set up a profile. And for sure these will be increasingly used for promotion, for REFs etc. This uses an algorithm from Physics I think. I applied to be a visiting lecturer recently and they asked what my h-index was.

Teacher practices and paradoxes – there are huge opportunities here but they are not necessarily being fully exploited; we see replication of bad pedagogy (electronic page turning for example). And in research-intensive universities like Edinburgh there is also a real tension between teaching and research, because promotion is based on research not teaching practice, and that pressurises time and attention.

So thinking about Learning Design we have been building up a series of principles. At Leicester we have Carpe Diem workshops on learning design and we’ve been combining this with some JISC work quite effectively. Our 7 Cs are Conceptualise, then… Capture, Create, Communicate, Collaborate, Consider – that’s an iterative cycle. And at the end of that you Consolidate.

In September we will be launching an MSc in Learning Innovation, using many of those learning design resources to think about how we approach this new MSc. So I’m going to share some of our slides and resources here. The programme includes a series of “e-tivities”. We trialled this with a group of teachers in South Africa, online over two weeks, with 8 slots of 1.5-hour face to face sessions and additional work around these.

Peter Bullen and colleagues at the University of Hertfordshire have this concept of How to Ruin a Course – a great way to think about and improve a course. So we used linoit.com – a virtual sticky board – to think about what would and would not be included, what elements would be needed, and what would definitely not be in there. And then we colour coded for types of course content (eg communication and collaboration, content and activities, guidance and support, reflection and demonstration), worked through this in Google docs, and mapped this into a course map. And that has been pulled out into a plan for the course, technologies and expectations. The point about these different views is that they are designed to be iterative and improved over time. They may look simple but they are grounded in good and substantial empirical research.

We have also tried to reuse as many OERs as possible, to adapt others, and to create as needed. We’ve done a learning design resources audit to think through all that we need to deliver this course. We’ve built in various aspects; we decided we wanted some podcasts, maybe a little interview or snippet of people like Diana Laurillard – at the OU we found students found these sorts of snippets really enjoyable and useful.

And then we’ve broken down the course activities into Assimilative, Information handling, Communicative, Productive, Experiential, and Adaptive activities. We have a little widget you can use here, and that gives us a picture of the profile of a course and lets you adapt it over time. This view can also be shared with students to good effect. I did an OU Spanish course and you get this amazing box labelled “Urgent: Educational Materials”. When I did OU Spanish my weakest area was communication by far. There is a really interesting link between what the course profile looks like and what the students need and take in.
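
[To illustrate the idea, here is a minimal Python sketch of that kind of activity profile – this is not Conole’s actual widget, and the example hours below are invented; it simply works out each activity type’s share of total course time.]

    # Minimal sketch of a course activity profile (illustrative only).
    ACTIVITY_TYPES = ["Assimilative", "Information handling", "Communicative",
                      "Productive", "Experiential", "Adaptive"]

    def course_profile(hours_by_type):
        """Return each activity type's share of total course time, as a percentage."""
        total = sum(hours_by_type.get(t, 0) for t in ACTIVITY_TYPES)
        if total == 0:
            return {t: 0.0 for t in ACTIVITY_TYPES}
        return {t: round(100 * hours_by_type.get(t, 0) / total, 1)
                for t in ACTIVITY_TYPES}

    # Invented figures for a hypothetical course that is weak on communication:
    example_hours = {"Assimilative": 40, "Information handling": 10, "Communicative": 5,
                     "Productive": 20, "Experiential": 15, "Adaptive": 10}

    for activity, share in course_profile(example_hours).items():
        print(f"{activity}: {share}%")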

As we started looking at the Learning Outcomes… we didn’t do that first, as you can get too stuck on the words here; it’s easier to look at this later when you have a sense of what will be done. Then we can draw things together, looking at how the Learning Outcomes and the Assessment (and all learning outcomes should be assessed) are hit along the timeline of the course. So we mapped that conceptual model. And then we went back to linoit and set up a week by week outline where everything comes together. We can then drill down to a “task swimlane” and put it into a little template for the e-tivities. And we are also drawing on some nice tools from the OU library in terms of information activities etc. And then finally we have an action plan for how we do this, a detailed plan to close the loop. These kinds of workshops can be very stimulating but you have to be able to follow up in a practical, useful way.

And finally…

Metaphors. The ones I’ve been playing with are:

  • Ecologies – the co-evolution of tools and users, a very powerful metaphor; niches and the colonisation of new habitats – Google+ perhaps; survival of the fittest
  • Memes – particularly drawing on Blackmore here: something that spreads like wildfire on the internet, but perhaps we’ve gotten too cosy here
  • Spaces – Campbell 72 talks about the cave, the campfire where we present, the mountain top, the watering hole – how might these apply in elearning?
  • Rhizomes – the stem of a plant that sends out roots as it spreads… multiple, interconnected and self-replicating, very like ideas and networking. Drawing on Dave Cormier here. Those of you on Twitter will recognise that sort of close, furtive network of connections I think.

The future of learning: technology immersed, complex and distributed… fuller notes on Slideshare: http://www.slideshare.net/GrainneConole/conole-edinburgh.

Q&A

Q1) You talked about learning outcomes needing to be assessed – can you talk more about assessment?

A1) Assessment is fundamentally about articulating whether students have understood what we want them to learn. I’m certain our old approaches are no longer appropriate. One of my daughters was…

Q2) I was interested in your last slide about digital futures, and in whether you had looked at opening up coding practices?

A2) I was involved in a project around x-ray crystallography, as chemistry is my original background. Making raw data available raises questions of ethics, and it’s a very different way to share our ideas when they are still developing. But when I blog things openly I get feedback that improves the work. I think more open approaches, particularly regarding data coding, could be really interesting.

Q3) What can be done to reduce the marginalisation of those not already using technologies?

A3) A lot of teachers do feel threatened; they are under a lot of pressure. I think this goes back to my day 1 of lecturing in chemistry: I was given a bunch of content and drew on my experience. I learned as I went, and I think that’s how a lot of teachers start. I think we need to ease teachers in with easy conceptual tools that let them assess what technologies may or may not be useful – they don’t have to use everything, they can’t possibly know everything, it’s about baby steps.

And on to our next speaker…

Motivated, Omnipotent, Obligated, and Cheap: Participating in a MOOC (Massive Open Online Course) – Jeremy Knox, PhD candidate, School of Education. Abstract and Biography.

The research I will be talking about today is my PhD research on MOOCs, which has been a participant observation pilot based on three different MOOCs: Change11 (Change: Education, Learning and Technology) – George Siemens, Stephen Downes and Dave Cormier; Udacity CS101 – an independent company created by Sebastian Thrun; and the first course offered by MITx.

MOOC stands for Massive Open Online Course. Udacity published 90,000+ enrollment numbers; MITx published 96,000+ enrollment numbers; Change11 has fewer, perhaps 1,300 active in the first three months based on my experience so far.

Open is perceived in the MOOC as both Open Access and Free. And for both Udacity and MITx that is what they do. That’s also why participant numbers are so hard to estimate for MOOCs – the door is open to entry but also to exit, so there are big gaps between enrollment and active participation. In the Change11 MOOC there is a more open curriculum: participants can decide their own outcomes and are encouraged to self-assess – a slightly different model.

Online tends to come down to either a central or a distributed space. Udacity and MITx have central spaces where all the learning takes place – a little like an institutional VLE basically. So you have a central space with video lectures, notes etc. Again this is a point of difference with Change11 – all the content is created by participants rather than one organisation, so it is distributed across the web – blogs, Twitter, etc.

Courses – MOOCs are structured courses. Udacity and MITx are very traditional with clear aims and objectives: here you have to learn about building a search engine or about circuits and electronics. In Change11 students have far more choice over what they learn.

Hopefully that gives a sense of what a MOOC is, that there are various models in use here. So I want to talk about some terms I think might also define the MOOC.

Motivated – a central aspect of being a participant in the MOOC. Downes (2002) says that if you are not motivated then you’re not in the MOOC. There is an assumption of motivation and no central intent to encourage, support or motivate students – perhaps an issue mappable to the wider OER discussion. And some work by Downes found that as little as 4% of participants are active in the MOOC. Here I’m showing a visualisation of communication on Twitter between Change11 participants – you can see a small number of highly active participants/course members.

Omnipotent – perhaps more relevant than open. MOOCs are sold as learners having lots of control over the learning process. They promote learner-defined aims and self-assessment. That implies an innate ability to self-direct within the MOOC, which we’ll come back to. Traditional education is framed as a passive process within this type of promotion. I suggest this isn’t just about Change11, which heavily promotes this view; in MITx and Udacity the same need for self-directed students is assumed. The MOOC absolves itself of responsibility for the students.

Obligated – Change11 requires students to aggregate, remix, repurpose and feed forward. Participation is seen as essential in the MOOC. This is down to the model of the network that underpins connectivist theory and the MOOC: the more connected you are, the better the learning is. The network isn’t an analogy for learning, it is learning in connectivism. So as the network decreases so does the learning – something to say there about collaboration. There is a tendency in the MOOC to enforce participation – important for the individual but also essential for the whole. So despite the idea of autonomy the network is crucial here.

So I think Omnipotent and Obligated are real clashing factors here… a problem for the MOOC.

Cheap – perhaps in the financial sense, but more in the sense of responsibility. Learners are responsible for their own motivation and must self-direct; in Change11 they have to decide their own outcomes; and if the learners don’t participate there is no course. There is a tendency for MOOCs to shift responsibility from the institution to the student.

So to finish… I would rephrase Downes as “if you’re not motivated then it’s not my problem!”. Now I think there is an argument for the institution or organisation to take that responsibility.

Q&A

Q1) I’ve participated in Change and I was a wee bit late contributing materials. I was excited to take part but it was rather demotivating as little was going on. Rather than Cheap perhaps Collaborative is more appropriate. Is that a better word than cheap?

A1) Yeah I think that’s part of it but I wanted to get at the fact that the institution should be involved. I think collaboration there would have to mean the institution also collaborating in the process.

Q2) Aren’t you trying to impose formal learning expectations onto an informal, lifelong learning space?

A2) I think I am questioning whether being able to self-direct is innate, and whether this discourse of openness and access is actually right, as these are not necessarily innate things: access to technology and understanding is not open, these are learned things.

Q3) I’ll come back to some of these issues but there is an interesting philosophical difference in France, where courses were open and people could join and disappear. Perhaps this is about opening opportunities for people to find out more and explore that learning; perhaps dropping out of these spaces isn’t a failure but a choice.

A3) That is a fair point.

Wilma is now talking about the University of Edinburgh’s Innovative Learning Week, which took place for the first time this year; our next speakers will be reflecting on that experience.

Case studies – Law less ordinary: reflections on Innovative Learning Week in the School of Law – Dr Gillian Black, School of Law.

I want to talk about one of our most successful ILW events. This was our Criminology photo competition, organised by one of my colleagues who lectures on the criminology degree. She asked students to identify images from news, videogames, films etc. around crime and injustice. The challenge was to use the image and use text to change our expectations. This was set up on PebblePad and you needed to send in an image, text and the name of contributors. Students took images and shared them with commentary. And she also wanted this to be freely and publicly available. You had to log in to add images, but you could comment as you would on a blog. It ran from the beginning of January to the end of Innovative Learning Week. It was very popular.

I think the winning entry was an image on the idea of “Facebook Rape” or Frape. The success was such that Dr Suami is looking at running an exhibition of these images. And that reinforces that this didn’t just happen online but was also part of our offline practice as well.

Why did this work? Well, Dr Suami is a very popular lecturer with enthusiastic students. And it was fun. But those of us who have found it difficult to get students along in person will perhaps understand that an advantage of this activity was that students could take part at any time and on their own terms. I hope this will have a lasting legacy.

The other aspect here was that the activity did cross courses, engage colleagues, really brought the programmes together.

Followed by: Changing Atmospheres – The 1 Minute Film Project at the School of Geosciences. Dr Elizabeth Olson.

This project involved 5 academics designing this over two months. We set undergraduate geography students a challenge: the task of recording audio and video separately and then making a one minute film. So there was a technology aim here. It was a two day challenge. We trained them in the basics of filmmaking – a good shot, storyboarding, artistic outputs, sound recording – and sent them out for 5 minutes to capture stuff. Then we had a full day for capture. We borrowed tools – H1 and H3 zoom mics, HD camcorders that the department has for research. We used Mac Pros and PCs, and brought some extra kit of our own into a lockable room. We ended up using Audition (free software) for audio, and beyond the free tools we used what software we had, so Adobe Premiere CS5 and Final Cut Pro – we didn’t have to induct them in any of the software really.

Feedback we had was really interesting – the storytelling aspect complemented everyday practice. A worrying comment was that this was the most useful 2 days of the year! And another student found it invaluable as an opportunity to explore the city as good geographers from a very different angle. We let students vote on the films so I’ll show them from least to most voted-on. [great wee films, although speeded-up scenes seem particularly popular]

We had incredibly positive feedback, a lot of students want to carry on filmmaking as a hobby, and students have talked about using film and photography in their assessed work. It was incredibly labour intensive, and incredibly good fun.

After a short tea break we are back for some case studies which are just being introduced by Marshall Dozier

Case study – 2012: A MATLAB® Odyssey – Antonis Giannopoulos, School of Engineering. Abstract

Really I should have Dr Craig Warren, my former PhD student, as author, it’s all his work but he is on holiday at the moment!

So I will be talking about turning a traditional lecture-based course into a largely online course. But let’s start with what MATLAB is, how we used to teach it, why it needed to change, the aims of the new course, what new material was created, what tools we used and some feedback.

So MATLAB is a programming environment for algorithm development, data analysis, visualisation and numerical computation. But it’s about problem solving – students don’t come in to learn programming for its own sake. We teach some sort of programming, usually in second year, in Chemical, Civil, Electrical and Mechanical engineering – we all arrived at MATLAB separately, but as we were all teaching the same thing we thought that we could really do something here to bring our teaching together in some way.

We were teaching MATLAB through lectures and some computer lab-based exercises. If you aren’t a programmer or don’t like programming these lectures can be really hard to engage with. We can have live examples, movies etc. but it’s not hugely effective. Those lectures were OK but not very exciting. We wanted to change this: a software tool like this you really only learn by doing, through hands-on programming. So we saw this as an opportunity to create really engaging interactive material. We created a 5 credit module and use this as part of other modules. We wanted it to be an online, self-paced, self-study model – pass the buck to the students to take responsibility for working through the materials. It was very much targeted at those with no prior knowledge of MATLAB or with no previous programming experience. And we wanted them to learn to be competent using the most common features of MATLAB to solve engineering problems.

The tools we used were screencasts created with ScreenFlow and a Samson Go Mic. And we have an online course PDF assembled from LaTeX source – LaTeX is old tech but lets you output your material to all sorts of different formats.

The new material created includes a core comprehensive PDF with links to lots of supporting material; self-test exercises; and tightly integrated screencasts linked from the PDF – showing and describing basic MATLAB concepts and providing solutions to exercises.

You can have a look at the site here: http://www.eng.ed.ac.uk/teaching/courses/matlab

And I’ll give you a demo here of a screencast.

This course is being used in all of the different 2nd year undergraduate courses across engineering. Students develop numerical and programming skills and the materials are being used really well. We have the courses as self-paced materials but they are well supported – in my course we have 10 x 2 hour labs to work through problems etc.

Student feedback has been really good. We intentionally limited screencasts to 5 minutes maximum so you go and do and practise as you work through the course. The course is available outwith the university and the screencasts are on YouTube. They’ve been live for 2 years so we’re starting to be able to analyse usage. We plan to publicise the course within UoE. And we want to use this course to develop similar material for other software tools that are part of degree programmes in engineering. And we want to look at other ways to make core materials available in more interactive ways – maybe with tools like iBooks for instance.

Acknowledgement here must be given to the Edinburgh Fund Small Project Grant which helped fund this work, to Dr Craig Warren of course, and to colleagues across Engineering and LTSTS for their support.

Q&A

Q1) You mentioned that MATLAB was really expensive and I was just wondering whether students have access to that software away from the lab as that can be really important for learners on self-paced courses.

A1) So the student version of MATLAB is available on all university machines across all labs etc. But students can also access MATLAB remotely via nx. It’s not as easy as it could be but they do have access whenever they need.

Q2) Any plans for transcripts for deaf students? I think you could be making the course inaccessible to those students with those videos. And transcripts may help foreign language students too.

A2) I haven’t thought about that particularly. I think that…

Q3) You talked about analysing use – how are you looking at this, and are you starting to look at student performance?

A3) Craig is starting to do this. We have seen far better performance on final exams. But we need to do more.

Case study – Maps mashups as a teaching aid. Richard Rodger, HCA

I’m going to be talking about the AHRC funded Visualising Urban Geographies project. And I want you to imagine yourselves as geographically challenged students here. We are great at the cultural aspect of history but I think we need to do far, far more with geospatial perspectives on history.

Our objectives were to create a set of geo-referenced historical maps of Edinburgh, to reach a broader public, to develop open source software and avoid GIS…

And the contributions of my colleague Stuart Nichol and the staff at the National Library of Scotland’s Map Rooms – which is a fantastic resource – have been crucial here.

So we started with resource development. Maps were scanned and geo-referenced. One of the core issues to address was the thorny issue of boundaries, and we wanted to make multiple types of boundaries available for all of these maps.

So maps have lots of historical information of course. I want to give you a few examples here. So looking at Edgar’s 1765 map we’ve given this topography – Edinburgh is certainly not flat! These maps have huge detail – looking at Edgar 1765 – so pick out something here, West Bow and Victoria Street perhaps, and I’ll show how this changes through 100 years of maps here. You can trace changes on the map and relate it to other documentary material and resources.

And then of course there is the chronological map – Chris Fleet of NLS is very proud of this form here, the map started in 1870 and gradually it grows to show the expansion and changes to the city over time, giving a 2D map a more dynamic feel that will appeal to a more general audience and their spatial awareness.

It’s probably evident here that our data is held in all sorts of different places… The Mapbuilder is all about address-based history – census data, taxation records etc. So we used a geocoder to exploit these address-based sources, plotting the points on a historical map – anyone can plot on a Google map, but it’s adding it to the historical map that adds important value here. So you can look, for instance, at the clustering of addresses of solicitors in Edinburgh. When addresses have been geocoded they can be exported as a KML and viewed on a historical map. So we have the distribution of Edinburgh solicitors from 1861 superimposed on a relevant historical map. If we look at the same sort of group of solicitors from 1811 we can see a move of location – that needs investigation; I think that’s very much about the change in legal practice around this time, from the lower New Town to more central commercial areas.
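
[To illustrate the export step Rodger describes, here is a rough Python sketch that writes geocoded addresses out as KML placemarks – the names and coordinates below are invented, and this is not the project’s own code.]

    # Turn geocoded address records into a KML file that a map viewer can overlay.
    from xml.sax.saxutils import escape

    def to_kml(places):
        """places: iterable of (name, latitude, longitude) tuples."""
        # Note: KML expects coordinates in longitude,latitude order.
        placemarks = "\n".join(
            f"  <Placemark><name>{escape(name)}</name>"
            f"<Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
            for name, lat, lon in places
        )
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
                + placemarks + "\n</Document></kml>\n")

    # Invented example records standing in for geocoded directory entries.
    solicitors_1861 = [
        ("J. Smith, 12 Hypothetical Street", 55.9533, -3.1883),
        ("A. Brown, 3 Example Row", 55.9520, -3.2010),
    ]

    with open("solicitors_1861.kml", "w", encoding="utf-8") as f:
        f.write(to_kml(solicitors_1861))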

There are other ways to make this sort of data available to the wider public. So looking at James Colville, the Edinburgh Cooperative Building Company Ltd, the colonies and his walk to work in the 1870s – looking at this data you can see real social change over time.

Similarly you can look at James Steel, 1869 – Easter Dalry feu – and see the development of Haymarket over time.

Another tool we have here allows you to measure distances: you can see Colville’s walk around the colonies – the distance, the gradient, the area of his travels. Very useful.

Of course addresses are one thing, but we also wanted to think about properties in Edinburgh, so boundaries and jurisdictions are very important here, and we’ve used our own data on properties. One of the greatest contributions I think is in the definition of these maps – by creating shapefiles for these maps we can pour data into our thematic mapping engine. We can use those boundaries to express the complexity of the administrative areas of the city. You have to imagine a mosaic of overlapping jurisdictions and some areas that are entirely dislocated from the rest of the city. For a historian, having that laid out means you can then plot data onto those maps with the appropriate boundaries. Whilst we did it for Edinburgh it could be done for any city really.
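
[As a rough illustration of what those boundary shapefiles enable, here is a minimal Python sketch using the geopandas library to join tabular data to boundary polygons and draw a thematic (choropleth) map – the file and column names are assumptions, not the project’s actual data.]

    import geopandas as gpd
    import pandas as pd
    import matplotlib.pyplot as plt

    # Historical boundary polygons (e.g. parishes or municipal wards), one row per area.
    boundaries = gpd.read_file("edinburgh_boundaries_1861.shp")

    # Tabular historical data keyed by the same area identifier.
    data = pd.read_csv("valuation_by_area_1861.csv")  # assumed columns: area_id, value

    # Join the data onto the boundaries and draw a simple choropleth.
    joined = boundaries.merge(data, on="area_id")
    joined.plot(column="value", cmap="OrRd", legend=True)
    plt.title("Hypothetical thematic map on 1861 boundaries")
    plt.savefig("thematic_map_1861.png", dpi=150)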

Q&A

Q1) How have students been finding these tools and what have they been doing with them?

A1) In History in Practice, dissertations and advanced projects – 8 different types of case studies of that. I’m possibly talking to the converted here but they have responded really positively. And there is a community neighbourhood project in Wester Hailes that has found this work really useful and there has been lots of community engagement there. And there is also a project on mill sites in Perthshire that has been using this data.

Case study – ‘Engage & Reveal’ project – Lindy Richardson, ECA

I’m going to talk to you about collaboration. The title should be “Reveal & Engage”. But after listening to everyone today I’m going to rename it “Engage, Reveal & Engage”. One of the challenges we have is about engaging our students. We artists can be quite separate in our practice until it comes to showing off – much of how artists use the web is about showing off our work!

So I want to start by talking about collaboration, working together to achieve something. Artists do get together, whether virtually or in the flesh. There are loads of collaborative drawing projects like the Moly_x: an international moleskine sketchbook exchange – you can find this on Flickr. Artists draw and send on and new material is added. It’s a progressive linear collaboration. You contribute and it is physically exchanged and posted on the web. You haven’t actually interacted with the other artists though. It’s actually quite remote.

I set up a project in ECA to help students understand how to physically interact with others’ work. Student one had two areas of pattern, student two had two different areas of pattern. And the idea was that they printed onto the print bed. Then for the second screen you had to print onto the previous person’s work. They freaked out! The idea was about physically interacting and engaging with their fellow students’ work. We do lots of physical stuff in art which allows lots of handing on of work rather than collaboration – but you wouldn’t do that with one person researching something, another writing an essay, etc. So the idea here was that they engaged with and reflected on the process, but still, students in the printing project were mainly thinking separately…

So, I then set up an international collaboration project. This was British Council funded and, across cultures, encouraged collaboration through physical exchanges of materials from indigenous cultures. So we showed students Ayrshire needlework and Paisley paisleys. Students responded to that original inspiration. And partner students in China did similarly, took inspiration and sent work to us. And then the idea was to exchange these fabric pieces and we would add to or subtract from them as part of the exchange. And what I expected was absolutely not what we got! We sent beautiful hand-embroidered pieces and many of them came back quite crunchy, quite glued. Some of our students were quite upset by that.

So… Reveal and Engage… was a project at ECA to encourage our students to work together, to move out of their bubble, and to find synergies and common research areas. So we wanted them to contact each other, to engage in dialogue and to be collaborative. As artists and designers, when we put up our materials online that’s our name, our work, and some text. So we did this event in the sculpture court. Each student got a 1.5 metre square space to pitch themselves. We taped out squares, they could pick their own area and sell themselves. You were speed-dating each other’s work basically. Interestingly a few programme directors said no to this event. But when the event ran the students kept coming up and wanting to join in. I was a bit naughty and let them take cards and engage but not pitch themselves.

So the students were required to provide a concise statement about their areas of interest and research focus, and examples of their own practice. It was really good for the students to think about that. So the students had a name plate with name, email, mobile number, website (where appropriate) and programme. In the second year we were asked for name badges, though one student hated that. The students had to make 5 contacts. This was excruciating for some of them. It’s so easy to do this by phone, email, etc.; to force them to do this physically was alien but was really, really helpful. They had to make a minimum of 5 follow-up meetings for discussion and potential development. Some were nervous about having too little interest, others were overwhelmed. Students quickly became aware of how effective and relevant their approaches were.

One of the most important things was to encourage students to enjoy the experience and to make contacts outside their area – it will have huge benefits in the future. So here is an image of an ECA fashion show where students from textiles and fashion have worked together.

And then… ?

The challenges of working together became apparent. We set up staff surgery sessions to help with this, which also allowed both students in a collaboration to work with staff at the same time, including staff from outside their own areas. And that helped a lot: you can set up “collaborations”, but as staff we often leave students to it, and they need some of that support to make it work.

Some great collaborations took place – lots of fashion and textiles students working together, and a great example of a performance costume student and a jewellery designer coming together. And the students really became aware of transferable skills, particularly around communication, presenting themselves, and being professional.

So how are the collaboration and the success of this venture assessed? We use the e.portal – we give feedback and the students have to also reflect on themselves, and only then do they see both aspects of feedback in parallel; we use peer assessment; and we had some sessions with the students themselves. But there are challenges here. Our students are very visual but they are not as keen to put their work into writing, so this means we can have great projects and work from students, but then their poorer performance on written aspects and reflection can affect their feedback or performance.

Next a project with concrete, glass and textiles in collaboration with Saint Peter and his collaborator as muse [I’m pretty sure that’s wrong, correction to come], an incredible concrete thing. And we will produce something amazing marking collaborative forward direction with the University which ECA is now part of.

And now, to lunch!

‘Enhancing the student experience – Representing, supporting and engaging with our 20,000 members’ – Rachel King, Martin Gribbon and Andrew Burnie, Paul Horrocks (in absentia), EUSA. Abstract

Through this session we hope to give an overview of EUSA’s activities and to give an idea of the practices and activities that IT tools have been used in our work. We had hoped that Paul Horrocks, a third year maths student whose work you will see, would be able to speak today but he’s tied up with exams at the moment but we wanted to acknowledge him here.

Our vision is to represent the student voice effectively to the university and beyond, to support student academic and social wellbeing, and to provide opportunities for participation and development through student activities, and things like discounted food and drink etc. We like to be a colleague, a critical friend etc. to the University. All students of the University are automatically EUSA members unless they choose to opt out.

Representation is really important, we have to show we are listening and responding and to know how best to support students. Our general meetings have had poor attendance in the past, often not quorate in fact, so we have, for the first time, run a referendum online this year. And we had an average of 2000 votes on each item versus meetings that would have perhaps had 120 students so that’s been a success we think. We do also try to encourage students to engage – we can seem like a strange and perhaps irrelevant interruption in studies. So we do things like supporting candidates for the student elections and telling them lots of tips and hints about how to run a successful campaign… [we are now watching a video made for candidates on how to deal with nice and very difficult students you are trying to engage with – on YouTube as Election Advice – Door Knocking; Election Advice – Lecture Announcements].

Representation is most effective when student led so I am handing over to Andrew to talk about a very successful online petition that he led…

So last year Registry informed us that they planned to reduce the month-long exam schedule down to two weeks. We were really angry and upset, as that crammed nearly 10 exams into a very short period. I am lucky: I’m a representative for my class so I could email student colleagues and let the university know. We were able to get it increased back to a three week period, but that wasn’t great. Many students hadn’t heard about this until my email; they didn’t feel informed or consulted by the University.

So I set up an online petition – I wanted names, and I wanted to know about course and school to see if this was just an issue for me and my colleagues. Then I wrote some code to turn the responses into a spreadsheet and look at the statistics. I thought that we would have loads of Science and Engineering responses but we actually had loads from HSS. And we had good responses from first and second year students. The most responses were from Informatics – not surprising as it’s my school and they personally had an email from me. And I got a lot of students on joint degrees commenting, as they felt that their dual schedules were not properly accommodated. I also had Google Analytics on the site to see activity. I shared the comments that had been placed; those pages were used quite frequently and students were really thinking about whether to sign it. It was first just promoted on Facebook by me and by emails to my school. On the third day I sent EUSA an email asking for it to go to class reps – when you target emails at engaged people like class reps, word spreads. And it went pretty viral on Facebook, so we saw lots more responses. Twitter was useful too but not for many – most students use Facebook, a lot don’t use Twitter, but computer scientists do.

So, we had all these responses and, with EUSA’s support, we got the decision reversed by Registry. So why was it successful? It was student led and that’s crucial. It was a petition about only one issue – focused and clear – but you could personalise it with the comments box. People could participate in different ways – by signing the petition, by sharing on Facebook or even coming to the meeting with Registry; allowing that engagement on lots of levels was really important. Back to Rachel…
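
[As a rough illustration of the kind of processing Andrew describes – turning the raw form responses into a spreadsheet and counting signatures per school and year – here is a minimal Python sketch; the input file and column names are assumptions, not his actual code.]

    import csv
    from collections import Counter

    by_school, by_year = Counter(), Counter()

    # Assumed export of the web form, with columns: name, school, year, comment.
    with open("petition_responses.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_school[row["school"]] += 1
            by_year[row["year"]] += 1

    # Write a spreadsheet-friendly summary of signatures per school.
    with open("summary_by_school.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["school", "signatures"])
        for school, count in by_school.most_common():
            writer.writerow([school, count])

    print("Signatures by year of study:", dict(by_year))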

One of the other things we do in supporting our members is services like the Advice Place – we offer accommodation, health, etc. advice and that’s all online now. And we have been working on outreach with a roadshow around the university campuses to explain what the Advice Place is and does. And part of that is ensuring their Facebook page and Twitter feeds are up to date. The Advice Place is now in the Dome with a lovely new centre. You can see that they are sharing information on Twitter about student support funds, condom deliveries, where to find them, etc.

Societies are a really big part of students’ lives here – there are over 160 – and we have been setting up a database of all societies so we can train treasurers etc. And you can now engage online, join online, pay your subs online etc. Each society has a page they can update to let people know what they’re doing.

We also have a volunteering centre in the Potterrow dome now and students can come in or look online for volunteering opportunities. The volunteering centre can easily add opportunities and students can easily sign up. I really encourage you to take a look and think about volunteering opportunities you may have – there is almost no part of the university that wouldn’t benefit from some volunteering effort.

We also have various peer support services – there is an International Buddy Project, and a project called Tandem – for people who want to practice speaking various languages, just talking not academic stuff, and that’s open to staff and students. We also have a scheme called Peer Proofreading and it followed a pilot in recognition of demand among non-native English speaking students for reliable sources of help in proofreading student work. The proofreading is purely about spelling and typos, not about academic content. So the student submits some work, it gets sent to a trained volunteer proof reader, and they send back feedback and the student can meet to discuss issues etc. And there is a community of proofreaders building up – a Facebook group for them, we’ve been surprised about how many students were keen to train as proofreaders actually.

And we have an initiative called Path Finder which is about choosing appropriate classes. At the moment students have only the DRPS, and it’s hard to navigate that system. Path Finder also helps highlight prerequisites etc. The idea is that students and staff have co-authored course descriptions. Students can see both sets of information and can see the consequences of taking a course in terms of eligibility for later courses etc.

So far they have the DRPS data and BOXE reports, and we hope that Paul, who has been designing this, will be able to work on it over the summer and will be able to get some financial support to do this. And now over to Martin…

I’m going to talk about a Facebook group we set up for Freshers Week. I don’t think this is necessarily groundbreaking but I wanted to explain why we used that approach.

This was a Facebook Group called Edinburgh University Freshers Week 2011. It has 2131 members. The first post by a student was on 17th June 2011, and we already had 1000 members by 17th July 2011. Students really want to engage early in the year.

So why do this? Well, students want to come together before September. It allows students to ask questions they might otherwise keep to themselves or each try to ask individually, and it allows students to share experiences and expertise. However a downside is that not all answers will be correct, so we have to keep an eye on it and comment to address any incorrect answers. We use social media a lot but this is by far the most successful social media activity we’ve done; it’s really enhanced the student experience.

So to look at Facebook here you’ll see a typical question which was about whether or not accommodation services should have been in touch, it gets 26 replies and they find solutions and approaches. And we have another student looking for others on his course. And others share where they will be, finding out who will be in your halls etc. You also see students setting up their own groups for various accommodation spaces etc.

We have already set up the Edinburgh University Freshers Week 2012 group. They have to ask to join. I’ll accept them only if they are real people; businesses we decline. But we’d encourage any staff who want to join this group to do so and help students feel part of the University. Back to Rachel…

Future challenges for us certainly relate to engaging with our ever-growing and diverse student body, and ensuring learning and teaching are inclusive and accessible – podcasts and WebCT being of concern at the moment.

Q&A

Q1) Are you thinking about having any special focus on distance students, as we increasingly have more of these?

A1) Rachel: We are talking with the University about this. There is also an independent group called SPARKS that supports student associations, who are also looking at issues around distance students and how to support them, so we are engaging.

A1) Martin: Obviously Facebook and Twitter etc. are globally available. We do also email all students, distance or not, about events on campus, campaigns etc.

Q2) DRPS is not only difficult for students, it’s very difficult for staff too. The Pathfinder system looks great but how do you plan to keep information current?

A2) One of the things that Paul has been grateful for is that the school felt that, to set this up, they needed the ability to maintain the system and keep it up to date. There would be a student coordinator every year to add new data.

Q3) Are there plans to roll out Pathfinder to other schools?

A3) They would very much like to. They have tried to design it so that that’s possible.

Case Study – ‘The Idiots Guide to Collaborative work practises: Author, The students’ – Victoria Dishon, School of Engineering

I’ve been doing some work with our students on how they engage with their academic studies using technology. When I started doing that there were significant discussions in our school about what students do when they receive an assignment from us. I didn’t say what sort of technology I was looking at. I just asked students about technology.

Someone from another organisation said that “Engineering does a lot of group work, do you provide collaborative software? What do the students do when you give them an assignment?” and although I had some ideas I wasn’t actually sure.

So to see why we do so much group work we needed to look at our degree programmes. All of these are accredited by the relevant professional body (e.g. the Institution of Mechanical Engineers) and as a result the activities and assessment are very structured. So I’ll show you our mapping of specific learning outcomes to the degree programme from when we were most recently accredited in 2008. If we have a look at these learning outcomes, the ways in which they are phrased clearly require you to talk to others and to exchange knowledge. And there is a requirement to manage and participate in shared experiences, in group experiences. And that is experience that you need to have for the real engineering world. And you need to understand customer relationships and peer collaboration.

So, I decided, going back to that original question, that I needed to speak to my colleagues about this and ask them that question. And my colleagues said: well it’s difficult to say; it depends on the assignment; I don’t really care as long as it comes in on time; well they must talk and meet. Some of my colleagues know really well what their students do. And it does depend on how much they are involved with a specific assignment. But generally it wasn’t really clear.

So I thought did I ask the right question? Did I ask the right people? So I decided that I better ask the students… So normally if you send out a student survey you will get 10-20 responses from super keen people. But I got 200 responses!

So I asked if they were using social media or file sharing sites for a class activity or an assignment: 94.5% said yes. I asked about what they were doing with them. There were tick boxes etc. and also loads of comments. I’m happy to share the detailed data here and will be doing that with my school of course. Students were using social media to discuss how they use class materials. Students upload tutorial sheets to Dropbox or Facebook and work their way through the tutorials. They write their workings out, take a picture, share it, correct each other’s work, explain what they’ve done wrong, etc.

Students responded that they do this all the time; it’s not part of their assignments alone, it’s a core part of what they do. They do a lot of filesharing, for varying reasons – mainly because email isn’t very efficient and they don’t want stuff lost in their inboxes. And they are creating shared materials, not just assignments. So they had more in their toolbox than we thought. Not hugely surprising, but the data is super helpful. We have decided we want to explore this more. I originally sent this survey to all our students and followed it up by asking if students wanted to come and chat. Seven students came to chat for half an hour; most conversations went on for an hour and a half in the end. All of those students were happy to work with the school to develop tools to help them with their learning. But that was a very self-selecting group.

So some examples…

A 1st year Civil Engineering student has a laptop and smartphone. They are part of her life, not just her studies. She uses Facebook every day, mainly for social activity, and she uses it as a lifeline back home to Aberdeen – that link was really important to making her feel at home at university. She is also happy to join in work on there. There is a year 1 Civil Engineering FB group – they gossip, they share class info etc. It’s set up by the students themselves. She did join a FB group for sharing documents and discussing an assignment; after that assignment completed, the group stopped. She uses Dropbox as it’s more reliable and harder to lose than a USB stick. She uses text messages to arrange personal and academic meetings. She’s not a big fan of email – it doesn’t seem personal enough for her; she’d prefer phone or Facebook.

A 3rd year Electrical and Mechanical Engineering student is a class rep and uses technology across personal and academic life. He uses Doodle to arrange meetings, with email confirmations. He uses Dropbox to manage all his files and to co-create academic materials; he doesn’t use his school file space at all. He also uses Dropbox to upload tutorial questions and past exam questions. And they use mobile phone or iPad cameras to share notes etc. – that was much more widespread than I realised. He regularly creates and manages FB groups, managing a University of Edinburgh society page including advertising. And he uses FB to plug gaps in the knowledge between his two disciplines that are not filled by the academic materials.

A 4th year Electrical and Informatics student considers himself to be completely digital; he uses a laptop and mobile. He sees everything online as his front space to the world – it is his personal brand – and he thinks that is very important. He uses Google docs, Dropbox etc. And he’s created loads of spaces himself here.

So the commonalities here…

  • Ease of use
  • frequency of access – they want everything when they need it and where they are
  • consideration of the tools that met the differing academic and social requirements
  • all demonstrated levels of understanding of privacy and security issues that suggested these had been considered before I spoke to them
  • all consider these tools to be essential to their academic work set
  • the development of these strategies happens mostly without UoE staff direction or guidance, through peer discussion and actions.
So… what do they do when we give them an assignment? They go out into the world and gather their digital office tools – on a bus, at the flat, in the library or in the computing labs. They work together, they work separately, and they share. And they do a great job of this without us.

Q&A

Q1) This sounds very positive but are there students who fall off the edge here?

A1) We had a real mixed set of responses. Some students were struggling and didn’t want technology forced on them. One of the students – the one that created the 3rd year Mech Eng FB group – told me there were 102 students in that course; 98 were in the group, and the four students who weren’t were being sent that material separately to keep them up to date.

Q2)

A2) We try to produce flexible students who have the knowledge to go out and find the materials needed for any task – whether an assignment or any other challenge. We are saying to them: here is the way to identify the problem, find the right tools and find the solution. So it’s about giving them the skills and toolsets to address any number of issues.

Q3) By the time you’ve reacted to what students say they want they will have moved on… or by formalising that space they will move on because they don’t want you there surveilling.

A3) I would quite like to have shown you the FB groups students use so I asked for permission, but they said no. It’s their space. If they want us to help they will ask for that, or many will. My concern is about those who are not confident enough to do that. But us going into their spaces is an issue, it would put them off. It does raise real questions of how you support technology and what technology you support.

And after a short tea break it’s onto the next session…

Case study – Digital Feedback – Dr Jo-Anne Murray, CMVM Abstract

I’m going to talk about some work we’ve been doing out at the Vet School. Some of our students are engaged in online distance education courses so when I talk about digital feedback I’m talking about distance students in particular.

Interaction and communication is key to engaging students in online learning. This is really clear when you look at the literature. So it’s about building a community learning experience. We provide virtual lectures that can be accessed asynchronously. We have a virtual classroom that allows real-time interaction between students and the instructor. We also have text-based synchronous discussion. And we have our own virtual campus in Second Life for students, and for interactions between students and instructors.

So we do provide an element of ongoing feedback. But when we come to assignment feedback this has typically been text based and has been delivered by email or through the VLE. Feedback enhances learning, yet hand-written comments can be given weeks after submission. And when we think about students’ perspectives of feedback and the National Student Survey, our students are not all that satisfied with feedback, particularly the timeliness of feedback, the level of detail, and how well that feedback is understood.

We have lots of work on feedback for traditional students but there has been pretty limited work on the role feedback plays in distance education. Most studies have only examined text-based feedback, which can be limited due to the lack of verbal and non-verbal information. Two important factors here are social presence and the sense of instructor interaction – things like friendliness, humour, ways to let the student know that the instructor is concerned and interested.

So thinking about digital technologies… we could use audio, screencasting, webcams. Although quite limited, there are some programmes using digital feedback in HE. And this potentially gives us an opportunity to provide richer, more detailed feedback; more comprehensive feedback; more timely feedback (but not taking more time to produce); and nuances conveyed through tone of voice and use of language. So hopefully enhancing the relevance, immediacy and usefulness of feedback.

So our case study here relates to the MSc/Dip/Cert in Equine Science. This is delivered part time over 3 years. And it is delivered using a blend of online learning methods, through asynchronous and synchronous discussion. Students enjoy and thrive on quality interactions and we really try to promote a sense of presence in the teaching. But feedback on assignments lacked that.

So we trialled feedback on the dissertation proposal assignment. We used screencasting software called Jing to deliver this digital feedback – it’s free to download, it’s easy to use and it’s less time consuming than generic feedback sheets. If I play you an example here you can see that you can talk through the feedback but also highlight the relevant text and the key areas being discussed.

We asked students for feedback. All of the students reported digital feedback as helpful and preferable to written feedback; they felt it was much more personal and helpful. Some also found seeing the text being discussed particularly helpful. In terms of improving the students’ work, many of our students felt that it did improve their understanding of how to improve their work. All students said they would like this type of feedback again. Most found it easy to access, and we supported those who had more difficulties.

In terms of tutor feedback and how I found it: it was very easy to use, it felt more personal to each student, and it probably included more detail – I was able to explain to a student how to improve her work far more easily through talking than through writing it down. And it was less time consuming.

In conclusion I would say it’s a very valuable tool for providing feedback. It was a very positive experience for both tutor and students. And it really enhanced the quality and timeliness of feedback.

Q&A

Q1) You used Jing, and I suspect the recording was stored on their own servers… so who has that recording? Are there any issues with that?

A1) You have to watch how you upload the recording to the servers but you can make it private to a specific URL. I have downloaded those files to our own servers as Flash files so they can be deleted if we want them to be.

OER, OCW, MOOCs and beyond: open educational practice European research & discussion – Professor Jeff Haywood, Vice Principal Knowledge Management and Chief Information Officer.

What I’m going to cover is to quickly look through OER, Open CourseWare, MOOCs etc. and educational practice, and to speak about what we do and don’t do here at the University of Edinburgh. And to end on a set of slides on economics.

If you want to read the best text on this it’s Taylor Walsh’s Unlocking the Gates (available free from Ithaka). So OER, or Open Educational Resources… it is an area of real interest to those working in education for development, developing nations etc., so organisations like UNESCO have funded these. And funding from HEA, JISC, Jorum etc. has been important to the creation of OERs. And people like Open Nottingham and Leicester, for instance, have really stepped into this. We have tried before and may want to revisit.

OpenCourseWare is a kind of hodgepodge of resources, many of them incomplete. MIT’s set are rated quite highly but many of the resources that are referenced are not open – you cannot do the readings here. There are standards coming through here… there is development of ISO standards taking place. And the Open University is one of those who have stepped into this domain and into free courses and the space of the MOOC. The thing to note here is the idea of fully automated courses. Stanford’s first course here was CS 101 and if you see their FAQs you are entirely walled out of the institution and you get no credits for the course. MITx awards you a certificate but not one tradable in the academic exchange sense. And there is ChangeMOOC, which is about the converted learning with the converted.

I also wanted to talk about Coursera, which is a Stanford spin-off. There is a question here for Edinburgh… do we build our own? For us we think it makes sense to join in with an existing leader so we are talking with Stanford and Coursera to open that up and looking for volunteers to build materials for that space.

And I wanted to move on to OEP – Open Educational Practices. There is the OPAL website (oer-quality.org) and this is about thinking about what you might do and what you might need. In terms of structure and need you will find some super thought-provoking discussion in the documentation there. There is a classification scheme with a Low to High Learning Architecture scale and an OER Usage scale from Low to High. So as an institution you can consciously think about where you may want to be on that spectrum.

The OER University – also mentioned earlier – one of the crucial things here is that it is going to be cheaper for the learner – there is a note there about cheaper rates for assessment and credit. So it has the model of learners learning from OER, supported by volunteers, then open assessment from participating institutions, which then grant credit for courses, and students are awarded diplomas or degrees [Jeff is showing a diagram adapted from Taylor 2007]. So we are seeing some decoupling of the institution here…

So I have been working on a project, OERtest, with Hamish McLeod, Sue Rigby and others, looking at how one can go about testing knowledge from OERs. And the guidelines we’ve been building up are concerned with entire course-modules offered as OER – the OER must be an entire course unit/module with full course materials, LOs, guides, assessment protocols and supporting documentation, equivalent to a unit/module offered in any HEI. It is intended for units which have been made available entirely online in one space. So it’s perhaps more like a MOOC.

We have several scenarios here. One is the OER traditional student who attends our institution, studies OER modules, requests assessment, then uses the credits within the same institution. Many were nervous about that but it seemed like the most straightforward idea.

The next scenario is OER Erasmus, which is the notion of a student completing a course from another university that is then used at the home institution – a Stanford CS module, say, as part of an Edinburgh programme.

Another scenario is OER RPL: this person is not a student at all, studies an OER module from… wherever, and requests assessment from our university and uses credits from our university. This is very much like recognition of prior learning. It should work with relatively flexible institutions. But if you look across Europe some organisations regulate that sort of possibility and process, and indeed regulate the cost for those sorts of work.

So the critical bit is you have to understand where in the qualification framework you will define yourself as an institution. You decide the level you want to work in. And how many credits you will assign to the work to be done. And then associated with that when you issue the marks you have to tell the people who are receiving those credits how the credits are acquired. And all of the students that graduate have a certificate explaining how the teaching took place.

So…. we took the proposal about the University offering credits for other learning to the Senatus Academicus and actually they were quite unfazed; as an institution we have real confidence in our ability to ensure that the right process takes place and that we do this properly if we decide to do it.

Economic Models..

OER

  • cost for HEI is the sum of the value of all inputs needed to design, develop and maintain course materials and the delivery platform, plus ensuring visibility.
  • return on investment – reputation, increased applications, signals quality, pro bono service, complies with current ethos
  • Cost for learner – not a lot of evidence that suggests the value to the learner community is significant. Time to use, need to integrate into other learning.
  • ROI for learner – additional learning materials for course or pleasure. There is some evidence that users of OER are already students looking for additional materials.

OCW…

MOOC

Cost for HEI: again as per OER plus light-touch tutoring/support, a light-touch assessment mechanism for a certificate (if offered), and “advertising” to keep pushing these courses.

ROI for HEI – all of the above but stronger, arena to “practice” OEP – and that’s a place to play that is separate from your main institutional practice

Cost for learner – as OCW but more structured/demanding – and that can mean more drop offs/out

ROI for learner – closer to the “educational real thing”, possible “proof” of competence as a certificate – not a trivial thing in some parts of the world. It will cost you ££s for your certificate but that proof of competence is fairly inexpensive and may be well worth the investment.

So… ROIs on accreditation of OER-based learning (=MOOC+Assessment+Accreditation)

The Cost for HEI:

IF unbundled curriculum: course materials/tutoring = 0

ELSE: course materials/tutoring = as for MOOC

+ full assessment for credit + award

ROI for HEI = as MOOC + ££s for assessment/accreditation

Cost for learner = time, ££s

ROI for learner = accreditation, certification and the pleasure of learning.

So… the cost implications of OER-based learning… Well…

  • Level 9 UoE course: 120 credits / 6 = 20 credits @ £9000/6 = £1500 if taken “normally”
  • Cost to assess learning achieved = 1 day work – £300/£600 (gross salary/fEc)
  • Cost to validate/award = 1 day work = £300/£600
  • Cost to learner for 20 credits = £600/£1200

So the cost is only a little lower than for a normal course. If we want this to be cheaper then the assessment must be lighter – it must be different from normal assessment, lighter and automated. Which is great for competence-based courses, not so much for qualitative courses.
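To make the arithmetic above explicit, here is a rough back-of-envelope sketch in Python using only the figures quoted on the slides (the day rates and fee are the speaker’s illustrative numbers, not official costings):

```python
# Back-of-envelope costing, using the illustrative figures from the slides
# above (gross salary vs full economic cost day rates); not official UoE numbers.

CREDITS_PER_YEAR = 120      # a full Level 9 year
COURSES_PER_YEAR = 6        # so one course = 20 credits
FEE_PER_YEAR = 9000         # GBP, fee if taken "normally"

normal_course_fee = FEE_PER_YEAR / COURSES_PER_YEAR   # ~GBP 1500 for 20 credits

day_rate = {"salary": 300, "fec": 600}                # one day of academic time

# One day to assess the learning achieved, one day to validate/award the credit.
cost_to_learner = {k: 2 * v for k, v in day_rate.items()}

print(f"Normal 20-credit course: £{normal_course_fee:.0f}")
print(f"Assess + award OER-based learning: £{cost_to_learner['salary']}–£{cost_to_learner['fec']}")
```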

And finally… we know what it costs to do it… what are we going to charge for it? The price can be set for any number of reasons… what can the market bear – which is important for most of our courses and is why the business school charges twice as much and dentists can charge even more. And then there is the impact on current offerings of price differentials, small or large. Impact on reputation for quality. A loss-leader approach? Purposeful cross-subsidy for pro bono services etc… How do you position your institution?

Conclusions – well there are spaces that you can experiment and play with in the wider educational ecologies for traditional universities. Change in education has been slow, perhaps leading to complacency, or at least low agility. Awareness of why one is there is important for reputation and sustainability. There really is no such thing as a free lunch, for either universities or learners.

Q&A

Q1) I don’t think I agree that the crunchy bit of the issue is the economic one; I’m concerned that the MOOC movement is going back to 1990s-style automated learning and isn’t very pedagogically interesting.

A1) I agree to an extent if we’re talking about what MOOCs have largely done to date… a lot have come from computer science and engineering type disciplines where there are competencies that can be assessed in more automated ways. But you need to get the learning outcomes and credits right here, and there is a trade-off between the types of course you run in these spaces versus in-person courses.

Q2) My issue is about what kind of learner we have in mind. Getting into the university has a bunch of prerequisites; that’s partly about fairness of admission, partly to make sure students are able to complete and succeed in a course. If you create a course that anyone can take we might as well just open our doors… that’s one of the implications I think. Isn’t there another or better way to tackle disadvantage of access? Should we provide a bridging process?

A2) I think those are legitimate concerns. But it depends on how you view entries to a MOOC. Participants only get assessment at the end of the programme, that’s one part of the answer, and the other is that this model is predicated on crowd-sourcing the answers to your questions. We shouldn’t assume we have to have the answers to everything. Maybe answers will come from knowledgeable others. Perhaps you moderate them, but it’s not your responsibility as an institution. It’s a different mindset to the one behind our closed gates.

Q2) So how do you manage those expectations?

A2) Well the key thing is it’s a different experience I’m talking about here.

And finally…

Dr Jessie Lee is closing the day for us with thank yous to the speakers, to the committee who have put today together, and to Information Services and the Institute for Academic Development – and let’s thank everyone who came along today as well.

And with that we are done here… lots of interesting stuff today and lots of thoughts and ideas to follow up on.

 


Geoforum 2012: Programme & Booking Information

Booking is now open for the EDINA Geoforum Event on the 20th of June at the National Railway Museum, York. All the details including a provisional programme are on the website: http://edina.ac.uk/events/geoforum2012/

This free event will provide an important opportunity for anyone who supports geographic services, data and software to find out what EDINA is doing with Digimap and our other geo-services.

There will also be representatives from our data suppliers, partner institutions and some of the major GIS software vendors present. The event will be a great way to get yourself up to date with what is happening with maps and geospatial data in Higher and Further Education.

Please register here: http://edina.ac.uk/events/geoforum2012/

We look forward to seeing you there.


Presentations from the Geospatial in the Cultural Heritage Domain Event now online

This is a very brief post to let you know that you can now find the presentations from the Geospatial in the Cultural Heritage Domain – Past, Present & Future event over on the event page we have set up.

The liveblog can be found here and is full of detailed notes from the presentations. We have also made the images of the event available over on Flickr, access all of the images from #geocult here.

Thank you to all of those who have already filled in our feedback survey on the event. We’re really pleased that so many of you found the event useful and enjoyable. We would still love to hear from you if you attended the event in person OR if you took part via the liveblog or tweets. The survey can be found here: https://www.survey.ed.ac.uk/geocult

If you attended, blogged about, or otherwise commented on the day do let us know and we’ll link to your post from the event page or our Storify archive of the event.

And finally… we will be uploading audio of all of the talks and discussions along with video of some of the presentations very shortly. We’ll let you know when they go live!

 


Liveblog: Geospatial in the Cultural Heritage Domain, Past, Present & Future

Today we are liveblogging from our one day event looking at the use of geospatial data and tools in the cultural heritage domain, taking place at Maughan Library, part of Kings College London. Find out more on our eventbrite page: http://geocult.eventbrite.com/

If you are following the event online please add your comment to this post or use the #geocult hashtag.

This is a liveblog so there may be typos, spelling issues and errors. Please do let us know if you spot a correction and we will be happy to update the post. 

Good morning! We are just checking in and having coffee here in the Weston Room of the Maughan Library, but we’ll be updating this liveblog throughout the day – titles for the presentations are below and we’ll be filling in the blanks throughout the day.

Introduction

Stuart Dunn, from Kings College London, is introducing us to the day and welcoming us to our venue – the beautiful Weston Room at the Maughan Library.

James Reid, from GECO, is going through the housekeeping and also introducing the rationale for today’s event. In 2011 JISC ran a geospatial programme and as part of that they funded the GECO project to engage the community and reach out to those who may not normally be focused on geo. Those projects, 11 of them in total, cover a huge range of topics and you can read more about them on the GECO blog (that’s here). We will be liveblogging, tweeting, sharing the slides, videoing etc. and these materials will be available on the website. Many of you, I know, will have directly or indirectly received funding from JISC for projects in the past so hopefully you will all be familiar with JISC and what they do.

And now on with the presentations…

Michael Charno, ADS, Grey Literature at the ADS

I’m an application developer for the Archaeology Data Service and I’m going to talk a bit about what we do.

We are a digital archive based at the University of York. We were part of the Arts and Humanities Data Service but that has been de-funded so now we sit alone, specialising in archaeology data. And we do this in various ways. This includes disseminating data through our website.

In archaeology there’s a long tradition of using maps to describe locations of events, places, communities etc. We use GIS quite a bit for research in the discipline. We have a big catalogue of finds and events – we have facets of What, Where and When. Where is specifically of interest to us. We mainly use maps to locate points – times, events, finds etc. We also have context maps – these just show people the location in which an item was found. We also have a “clicky map” but actually this is just a linked image of a map to allow drill down into the data.

One step up from that we use a lot of web maps, some people call them web GIS. You can view different layers, you can drill down, you can explore the features etc. But this is basic functionality – controlling layer view, panning, zooming etc. With all of these we provide the data to download and use in desktop GIS – and most people use the data this way, primarily I think this is because of usability.

And more recently we’ve been looking to do more with web maps. But we haven’t seen high use of these; people still tend to download data for desktop GIS if they are using it for their research. We have done a full-blown web GIS for the Framework Stansted project – there was a desktop standalone ESRI version, but they wanted a web version and we therefore had to replicate lots of that functionality, which was quite a challenge. But again we haven’t seen huge usage; people mainly use the data in their desktop applications. I think this is mainly because of the speed of using this much data over the web. But the functionality is there on the web.

We have found that simplicity is key. But we think that web GIS isn’t realistic. We aren’t even sure web mapping is that realistic. If people are really going to use this data they are going to want to do this on their own machines. We thought these tools would be great for those without an ESRI licence, but there are now lots of good open source and free-to-use GIS – Quantum GIS in particular – so we increasingly discourage people from giving us money to create web GIS. Instead we’re looking at an approach of GeoServer on top of a spatial database in Oracle to disseminate this data.

Issues facing the ADS now include the long term preservation of data and mapping (ArcIMS is no longer supported by ESRI for instance); usability – we can upgrade these interfaces but making changes also changes the usability, which can be frustrating for users; proprietary technology – the concern is around potential lock-in of data so we are moving to make sure our data is not locked in; licensing – this is a can of worms, talk to Stuart Jeffrey at the ADS if you want to know more about our concerns here; and data – actually we get a lot of poor quality or inconsistent data and that…

ARENA project – a portal to search multiple datasets. This used What, Where, When key terms. The What was fine – we used a standard method here. When was challenging but OK. But Where was a bit of an issue; we used a box to select areas. We tried the same interface for the TAG – Transatlantic Archaeology Gateway – service but this interface really didn’t work for North America. So we want to be able to search via multiple boxes in the future.

ArcheoTools – we wanted to analyse texts including grey literature. There was spatial information we could easily pull out and plot. Modern texts were OK but older texts – such as those of the Society of Antiquaries of Scotland – were more challenging. The locations here include red herrings – references to similar areas etc. We partnered with the Computer Science Department at the University of Sheffield for the text mining. Using KT/AT extension and CDP matching we had about 85% matches on the grey literature. We also tried EDINA’s GeoCrossWalk, which gave even better accuracy – only 30 unresolved place names. I think we didn’t use the latter in the end because of disambiguation issues – a challenge in any work of this type. For instance when we look at our own data it’s hard to disambiguate Tower Hamlets from any Towers in any Hamlets…

Going back into our catalogue ArchSearch – you can drill through area sizes – we were able to put this grey literature into the system at the appropriate level. We also have new grey literature being added all the time, already marked up. So this lets us run a spatial search of grey literature in any area.

What we saw when we rolled out the ability to search grey literature by location was a spike in downloads of grey literature reports. Google was certainly trawling us and that will throw the figures, but it was definitely useful for our users too and there was a spike in their use as well.

Again looking at ArchSearch, one of the issues we have is the quality of the records. We have over 1 million records. We ingest new records from many suppliers – AH, county councils etc. – and add those to our database. We actually ran a massive query over all of these records to build our own facet tree to explore records in more depth. We want to capture the information as added but also connect it to the correct county/parish/district layout as appropriate. We also have historical counties – you can search for them but it can be confusing; for instance Avon doesn’t exist as a county anymore but you will find data for it.

The other issue we find is that specific coordinates can end up with points being plotted in the wrong county because the point is on the border. Another example was that we had a record with a coordinate for Devon but it had an extra “0” and ended up plotted off the coast of Scotland!

I know that Stuart will be talking about DEEP later, which is great; we would love to have a service to resolve placenames for our future NLP so that we can handle historical placenames, spatial queries and historic boundaries. It would be nice to know we remain up to date/appropriate to date as boundaries change regularly.

The future direction we are going in is WMS publishing and consumption. For instance we are doing this for the Heritage Gateway. Here I have an image of Milton Keynes – not sure if those dots around it are errors or valid. We are putting WMS out there but we’re not sure anyone’s ready to consume that yet. We also want to consume/ingest data via WMS to enrich our dataset, and to re-share that of course.
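As an illustration of what consuming a WMS layer involves, here is a minimal sketch of a standard OGC GetMap request in Python – the endpoint URL, layer name and bounding box are placeholders invented for illustration, not the ADS’s or Heritage Gateway’s actual service details:

```python
import requests

# Hypothetical WMS endpoint and layer - substitute real service details.
WMS_URL = "https://example.org/geoserver/wms"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "heritage:monuments",          # placeholder layer name
    "STYLES": "",
    "SRS": "EPSG:27700",                     # British National Grid
    "BBOX": "480000,230000,500000,250000",   # minx,miny,maxx,maxy (placeholder)
    "WIDTH": "800",
    "HEIGHT": "800",
    "FORMAT": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()

# The response body is a rendered map image of the requested area.
with open("monuments.png", "wb") as f:
    f.write(response.content)
```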

And finally we are embarking on a Linked Data project. We currently have data on excavations as Linked Data but we hope to do more with spatial entities and Linked Data and GeoSPARQL type queries. Not quite sure what we want to do with that because this is all new to us right now.

Find out more:

  • http://archaeologydataservice.ac.uk/
  • @ADS_Update
  • @ADS_Chatter
Q&A
Q1: It seems like your user community is quite heterogeneous – have you done any persona work on those users? And are there some users who are more naive?
A1: We’ve just started to do this more seriously. Registration and analytics let us find out more. Most are academics, some are commercial entities, but the largest group is academics. I think both groups are equally naive actually.
Q2: Why Oracle?
A2: Well, the University has a licence for it. We would probably use Postgres if we were selecting from scratch.

Claire Grover, University of Edinburgh, Trading Consequences

This is a new project funded under the Digging Into Data programme. Partners in this are the University of Edinburgh Informatics Department, EDINA, York University in Canada and University of St Andrews.

The basic idea is to look at the 19th century trading period and commodity trading at that time, specifically for economic and environmental historical research. They are interested in investigating the increase in trade at this time and the hope is to help researchers in this work, to discover novel patterns and explore new hypotheses.

Consider a typical map a historian would be interested in drawing. If we look at Cinchona, it is the plant from which quinine derives; it grows in South America but they began to grow it in India to meet demand at the time. Similarly we can look at another historian’s map of the global supply routes of West Ham factories. So we want to enable this sort of exploration across a much larger set of data than the researchers could look at themselves.

We are using a variety of data sources, with a focus on Canadian natural resource flows to test reliability and efficacy of our approach and using digitised documents around trading within the British Empire. We will be text mining these and we will populate a georeferenced database hosted by EDINA, and with St Andrews building the interface.

Text mining wise we will be using the Edinburgh GeoParser, which we have developed with EDINA and which is also used in the Unlock Text service. It conducts named entity recognition – place names and other entities, and we will be adding commodities for Trading Consequences – and then there is a gazetteer lookup using Unlock, GeoNames, and Pleiades+, which has been developed as part of the PELAGIOS project. The final stage is georesolution, which selects the most likely interpretation of place names in context.
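The pipeline described here – recognise place names, look up candidate locations in a gazetteer, then resolve each name to its most likely candidate in context – might be sketched very roughly as below. This is an illustrative outline, not the Edinburgh GeoParser’s actual code: the hard-coded name list stands in for real named entity recognition, and the public GeoNames web service stands in for Unlock/Pleiades+.

```python
import requests

GEONAMES_USER = "demo"   # placeholder: GeoNames requires a registered username


def find_place_names(text):
    """Stand-in for named entity recognition (done by trained models in the
    real pipeline) - here just a fixed list of example names."""
    known = ["Edinburgh", "Calcutta", "Jamaica"]
    return [name for name in known if name in text]


def gazetteer_candidates(name, max_rows=5):
    """Look up candidate locations for a place name via GeoNames."""
    r = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": name, "maxRows": max_rows, "username": GEONAMES_USER},
        timeout=30,
    )
    r.raise_for_status()
    return r.json().get("geonames", [])


def georesolve(candidates):
    """Pick the 'best' candidate; the real georesolver uses document context,
    feature type, etc. - population is just a crude stand-in heuristic."""
    return max(candidates, key=lambda c: int(c.get("population") or 0), default=None)


text = "Sugar was shipped from Jamaica to Edinburgh."
for name in find_place_names(text):
    best = georesolve(gazetteer_candidates(name))
    if best:
        print(f"{name} -> {best['lat']}, {best['lng']}")
```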

So to give you some visuals here is some text from Wikipedia on the Battle of Borosa (a random example) as run through the Edinburgh GeoParser. You can see the named entity recognition output colour coded here. And we can also look at the geo output – both the point it has determined to be most accurate and the other possible candidates.

So what exactly are we digging for in Trading Consequences? Well, we want to find instances in text of trade-related relationships between commodity entities, location entities and date entities – what was imported/exported, from where, and when. Ideally we also want things like organisations, quantities and sums of money as part of this. And ultimately the historians are keen to find information on the environmental impact of that trade as well.

Our sources are OCR textual data from digitised datasets. We are taking pretty much anything relevant but our primary data sets are the House of Commons Parliamentary Papers, Canadiana.org and the Foreign and Commonwealth Office records at JSTOR. Our research partners are also identifying key sources for inclusion.

So next I am going to show you some very, very early work from this project. We’ve done some initial explorations of two kinds of data using our existing text mining toolset – primarily for commodity terms to assist in the creation of ontological resources, as we want to build a commodity ontology. And we’ve also looked at sample texts from our three main datasets. We have started with WordNet as a basic commodity ontology to use as a starting point. So in this image we have locations marked up in purple, commodities in green. We’ve run this on some Canadiana data and also on HCPP as well.

So from our limited starting sample we can see the most frequent location-commodity pairs. The locations look plausible on the whole. The commodities look OK but “Queen” appears there – she’s obviously not a commodity – and similarly “possum” and “air”. But that gives you a sense of what we are doing and the issues we are hoping to solve.

The issues and challenges here: we want to transform historians’ understanding but our choice of sources may be biased just by what we include and what is available. The text mining won’t be completely accurate – will there be enough redundancy in the data to balance this? And we have specific text mining issues: low-level text quality issues, issues isolating references, French language issues etc. And we have some georeferencing issues.

So looking at a sample of data from Canadiana we can see the OCR quality challenges – we can deal with consistent issues (“f” standing in for “ss”, for instance) but can’t fix gobbledegook. And tables can be a real nightmare in OCR, so there are issues there.

Georeferencing wise we will be using GeoNames as a gazetteer as it’s global, but some place names or their spellings have changed – is there an alternative? We also have to segment texts into appropriate units – some data is provided as one enormous OCR text, some is page by page. Georesolution assumes each text is a coherent whole and each place name contributes to the disambiguation context for all of the others. And the other issue we have is the heuristics of geoparsing. For modern texts population information can be useful for disambiguation, but that could work quite badly/misleadingly if applied to 19th century texts – we need to think about that. And we also need to think about coastal/port records perhaps being weighted more highly than inland ones – but how do you know that a place is/was a port? We’ve gone some way towards that as James has located a list of historical ports with georeferences, but we need to load that in to see how it works as part of the heuristics.
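One way to picture the heuristic tuning discussed here is as a weighted score over each gazetteer candidate. The weights and the example port list below are hypothetical tuning parameters invented for illustration, not values from the project:

```python
# Hypothetical candidate-scoring heuristic for 19th-century trade texts.
HISTORICAL_PORTS = {"Liverpool", "Glasgow", "Bristol", "Calcutta"}   # illustrative sample

def score_candidate(candidate, context_country_codes,
                    w_pop=0.2, w_port=2.0, w_context=1.0):
    """Score one gazetteer candidate (a dict with name/population/countryCode)."""
    score = 0.0
    # Population helps for modern texts but can mislead for historical ones,
    # so it gets a deliberately small weight here.
    score += w_pop * (candidate.get("population") or 0) / 1_000_000
    # Trade records favour ports: boost candidates on a known historical port list.
    if candidate.get("name") in HISTORICAL_PORTS:
        score += w_port
    # Prefer candidates consistent with other places already resolved in the document.
    if candidate.get("countryCode") in context_country_codes:
        score += w_context
    return score

# Usage: pick the highest-scoring candidate for an ambiguous place name, e.g.
# best = max(candidates, key=lambda c: score_candidate(c, {"GB", "IN"}))
```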

Humphrey Southall, University of Portsmouth, OldMapsonline.org

I wanted to do something a bit controversial. So firstly, how many of us have a background in the academic discipline of geography? [it’s about five of those in the room]. A lot of what’s going on is actually about place, about human geography. I think GIS training warps the mind, so I wanted to raise this issue of Space vs. Place.

There is a growing movement towards using maps both for resource discovery and visualisation. But it does lead to inappropriate use of off-the-shelf GIS solutions. There are 3 big problems: map based interfaces are almost entirely impenetrable to search engines, yet search engines are how most people find information and discover things – the interface is a barrier, though that doesn’t mean scrapping them; mapping can force us into unjustifiable certainty about historical locations; and this isn’t actually how most people think about the world – people are confused by maps, but they can handle textual descriptions of place.

So, looking at locational uncertainty in the past. Cultural heritage information does not include co-ordinates; it has geographical names. Even old maps are highly problematic as a source of coordinates. And converting toponyms to coordinates gets more problematic as we move back in time. 19th and 20th century parishes have well-defined boundaries that are well-mapped – but still expensive to computerise; my Old Maps project has just spent £1 million doing this. Early modern parishes had clear boundaries but few maps, so we may know only the location of the admin centre; earlier than that and things become much more fuzzy.

If we look at county records…

Geographical imprecision in the 1801 census – it’s a muddle, it’s full of footnotes.

Geo-spatial versus geo-semantic approaches. GIS/geo-spatial approaches privilege coordinates – everything is treated as an attribute of coordinate data. By comparison geo-semantic approaches are descriptive of place…

Examples of sites with inappropriate use of geo-spatial technology: Scotland’s Places has a search box for coordinates – who on earth does that?! But you can enter placenames. Immediately we get problems: 6 matches and the first 2 are for Glasgow the city only, and then there are 4 for Glasgow as a wider area. This is confusing for the user – which do we pick? Once we pick the city we get a list of parishes, which is confusing too, and we encounter an enormous results set, and most of what we get isn’t information about Glasgow but about specific features near Glasgow. This is because at its heart this system has no sense of place – it just finds items geolocated near features of Glasgow. I could show the same for plenty of other websites.

For an example of an appropriate sense of place – HistoryPin, who are speaking later, as images have an inherent sense of location. Another example is Old Maps Online.

Geo-semantics – geography represented as a formal set of words. This is about expressing geographic traits formally – IsNear, IsWithin, IsAdministrativelyPartOf, Adjoins. Clearly GIS can express some of these relationships more fully – but only sometimes, and assuming we have the information we need there.
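A minimal sketch of what “geography as a formal set of words” might look like as data, using the relations listed above; the example places and links are made up for illustration and are not Vision of Britain data:

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    """A place described by named relationships rather than coordinates."""
    name: str
    is_within: list = field(default_factory=list)                   # e.g. parish within county
    is_administratively_part_of: list = field(default_factory=list)
    adjoins: list = field(default_factory=list)
    is_near: list = field(default_factory=list)

# Made-up example entries for illustration only.
glasgow = Place("Glasgow", is_within=["Lanarkshire"],
                is_administratively_part_of=["Scotland"])
govan = Place("Govan", is_within=["Lanarkshire"], adjoins=["Glasgow"])

def places_within(unit, places):
    """A simple geo-semantic query: everything recorded as being within a unit."""
    return [p.name for p in places if unit in p.is_within]

print(places_within("Lanarkshire", [glasgow, govan]))   # ['Glasgow', 'Govan']
```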

One problem we had on the Vision of Britain project was how to digitise this material; we really had to deliver to the National Archives. Frederick Youngs’ Guide to the Local Administrative Units of England – no maps, no coordinates, two volumes – is a fantastic source of geographical information. This is used in Old Maps Online. There is a complex relationship here. Using visualisation software on the structure we built from Youngs you can find out huge amounts about a place. One point to note is that this is not simply one academic project – I’ve shown you some of the data structure of the project but it’s not about just one website. And we do have huge amounts of traffic – up at 140k-ish unique users a month. So let’s do a search for a village in Britain – the suggestion from the crowd is “Flushing” apparently… Google brings back Vision of Britain near the top of the list for a search of “History of…” for any village in Britain. I’m aware of very few cultural heritage sector websites that do this. We did this partly by having very clear, very semantically structured information behind the site, there and available for crawling. We will be relaunching the site with some geospatial aspects added, but we also want to make our geosemantic information more available for searchers. We use a simple GeoParser service, mainly for OldMapsOnline and the British Library. We will be making that public. And we rank that based on frequency of place name, a very different approach to the one outlined earlier.

Q&A

Q1) I suspect that the reason Flushing didn’t get you to the top of the list is because the word has another meaning. What happens with somewhere like Oxford where there are many places with the same name?

A1) Well it’s why I usually include a county in the search – also likely to help with Oxford but of course for bigger places we have much more competition in Google. I think the trick here is words – Vision of Britain includes 10 million words of text.

Q2) Is this data available as an API? Or are all maps rasterised?

A2) Most of our boundaries are from UK Borders free facility for UK HE/FE. We have historic information. In terms of API we are looking at this. JISC have been funding us reasonably well but I’m not entirely happy with the types of projects that they choose to fund. We have put that simple GeoCoder live as we needed it. Some sort of reverse geocoder wasn’t too hard.

James: we support an internal WFS of all of the UK Borders data and data from Humphrey

Comment: We’ve used OS data from EDINA for our data. I was hoping there was something like that we could use over the web

James: I think it’s very much about licencing in terms of the OS data, for Humphrey’s data it’s up to him.

Humphrey: We haven’t been funded as a service but as a series of digitisation projects and similar, we make our money through advertising and it’s unclear to me how you make money through advertising for a Web service.

Stuart Nicol, University of Edinburgh, Visualising Urban Geographies

I’m going to be talking about the Visualising Urban Geographies project which was a collaborative project between the University of Edinburgh and the National Library of Scotland funded by the AHRC.

The purpose of the project was to create a set of geo-referenced historical maps of Edinburgh for student learning purposes, to reach a broader public through the NLS website, to develop tools for working with and visualising research on maps, and to trial a number of tools and technologies that could be used in the future.

The outputs were 25 georeferenced maps of Edinburgh from 1765-1950 (as WMS, TMS and downloadable JPG, JGW) as well as a suite of digitised boundary polygons (ShapeFiles and KML). We have used various individual maps as exemplars to see what might be possible – 3D boundaries etc. We also documented our workflows. And finally we created a series of web tools around this data.

The web tools are about quick wins for non-GIS specialists – ways to find patterns and ideas to build on, not mission critical systems. To do this quickly and easily we inevitably have a heavy reliance on Google. A note on address-based history: researchers typically gather a lot of geographic data as addresses, as text. And it can be hard to visualise that data geographically, so anything that helps here is useful.

So looking at our website – this is built on XMaps with the Google Maps API and a tile map service for historic maps. You can view/turn on/off various layers, and you can access a variety of tools and basemaps. This includes the usual Google Maps layers, the Microsoft Virtual Earth resources and OpenStreetMap. So you can view any of these maps over any of these layers. You can also add user generated data – you just need an XML, KML or RSS link to use in the tool. The Google Street View data can be very useful as many buildings in Edinburgh are still there. We have a toolbox that lets you access a variety of tools to use various aspects of the map, again just using the Google address API. We use the Elevation API to get a sense of altitude. We’ve also been looking at the AddressingHistory API – geocoding historical addresses. So here I’m looking in the 1865 directory for bakers. And I can plot those on the map.

One of the main tools we wanted to provide was a geocode tool for researchers. Our researchers have long lists of addresses from different sources. They simply copy from their spreadsheet into the input field in our tool, the API looks for locations, and you get a list and a rough plot for those addresses. And we’ve built in the ability to customise that interface. This uses Google Spreadsheets and your own account, so you can create your own sets of maps. To edit the map we have the same kind of interface on the web. You can also save information back to your own Google account. And we also have an Add NLS Data facility – using already digitised and georeferenced maps from the NLS collections.
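To give a feel for what the batch geocode step does behind the scenes – paste addresses in, get approximate points back – here is a rough sketch using geopy’s Nominatim (OpenStreetMap) geocoder as a stand-in; the project itself combined Google, Yahoo and AddressingHistory APIs, and the addresses below are made up:

```python
import time
from geopy.geocoders import Nominatim   # stand-in geocoder, not the one the project used

geocoder = Nominatim(user_agent="address-history-sketch")

# Addresses as a researcher might paste them from a spreadsheet (made-up examples).
addresses = [
    "17 Princes Street, Edinburgh",
    "5 South Bridge, Edinburgh",
]

for addr in addresses:
    location = geocoder.geocode(addr)
    if location:
        print(f"{addr} -> {location.latitude:.5f}, {location.longitude:.5f}")
    else:
        print(f"{addr} -> no match found")
    time.sleep(1)   # Nominatim's usage policy asks for at most one request per second
```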

You can publish this data via the spreadsheets interface and that gives you a URL that you can share which takes you to the tool.

So we went for a very lightweight mashup approach. We use Google Maps, Geocoding, Elevation, Visualisation, Docs & Spreadsheets, Yahoo geocoding, NLS Historic Mapping and AddressingHistory as our APIs – a real range combined here.

But there are some issues around sustainability and licensing here. We use Google Maps API V2 and that’s being deprecated. What are the issues related to batch geocoding from Google? Google did stop BatchGeo.com from sharing batch geocoded data as it broke third party terms, so that’s a concern. There is a real lack of control over changes to APIs – the customise option broke a while ago because the Google Spreadsheets API changed. It was easy to fix but it took a while to be reported; you don’t get notified. Should we use plain HTTP or an API? Some of the maps we use are sitting on a plain HTTP server – that means anyone can access them, and speed can be variable if heavily used. The NLS have an API which forces correct attribution but that would take a lot of work to put in place. And also TMS or WMS? We have used TMS but we know that WMS is more flexible, more compliant.
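For readers unfamiliar with what a tile map service actually serves, the standard “slippy map” arithmetic for turning a longitude/latitude into a z/x/y tile address looks roughly like this. This is the general Web Mercator convention rather than code from the project; note that TMS counts tile rows from the bottom of the map, while the Google/XYZ scheme counts from the top.

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Return the x/y index of the Web Mercator (XYZ) tile containing a WGS84 point."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Edinburgh city centre at zoom 12 (illustrative values)
x, y = lonlat_to_tile(-3.19, 55.95, 12)
print(f"z=12/x={x}/y={y}")   # a tile URL is typically .../12/{x}/{y}.png
```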

And we face issues around resources and skills. We can forget that we have benefitted from our partnership with the NLS, with access to their collection, skills, infrastructure and all those maps. One of our more ambitious aims was that our own workflow might help other researchers do the same thing in other locations. But this isn’t as easy as hoped. We have a colleague in Liverpool and a colleague in Leicester both using the tools but both constrained by access to historical maps in usable formats. And they don’t have the skills to deal with that themselves. Who should be taking the lead here? National libraries? Researchers?

In terms of what we have learned in the project, we have found it useful to engage with the Google tools and APIs as they allowed us to build functional tools very quickly, though we are aware that there are big drawbacks and limitations here. And we have successfully engaged researchers and the wider community – local history groups, secondary schools etc.

Jamie McLaughlin, University of Sheffield, Locating London’s Past

Locating London’s Past was a six month JISC project taking a 1746 map, georeferencing it, and visualising data from textual sources and data from the period on this map. We also ran a gazetteer derived from the 1746 map, and it was also vectorised for us so you can view all the street networks etc. Our data sources contained textual descriptions of places and we regularised these for spellings and compound names. And then these were georeferenced to show on our map.

What’s interesting exploring the data is to search, say, for murders – Drury Lane has a lot, which is perhaps not surprising. But murders…

We used Google Maps as it was so well known; it seemed like the default choice and we didn’t think too deeply about that. It does do polygons and custom markers. And it does let you do basic GIS – you can measure distance, draw polygons etc. And it’s well known, as are the Google conventions. Like the previous presentation this was a “lightweight mashups” approach. What can’t be underestimated is the usefulness of the user community – a huge group to ask if you have a question. The major downside of course is the usage limit – 25k map loads a day for free, after that you have to pay. These new terms came in just at the end of the project. It’s a reasonable thing and you have to be over that level for 90 days, so spikes are OK. But it’s expensive if you go over your limit: $4 for an additional 1000 loads, and at really high levels it’s $8 for an additional 1000 loads. There is a very vague/sketchy educational programme which we’d hope we’d qualify for.

So retrospectively we’ve looked at alternatives. OpenLayers – I think Vision of Britain uses this – uses OGC standards, you can load raster or vector layers from anywhere and you’re not trapped into a single projection. PolyMaps is another alternative I looked at; it uses vector layers and gets the browser to do all of the work – PolyMaps is all in a 30k JavaScript file. Mind you, we always envisioned using a raster version, but I think we could make a very cool vectorised version of Locating London’s Past – the 1746 map is pretty but not essential. And Leaflet is also available; it’s small and sweet and pretty, and genuinely open source as well.

When you push more and more data over the web you are forcing the user’s browser to do a lot of work. Locating London’s Past relies on JavaScript and the user’s browser, but it can be slow or unreliable dependent on your connection. Another challenge is geocoding textual information or sources. Firstly, placenames are often not unique. In London there are lots of roads with the same name – there are 5 Dean Streets. And variant spellings aren’t reliable – they can be entirely different places. In 1746 there are two Fleet Streets, and finding a tiny alley off one of them is a real challenge. We didn’t leave anything like enough time to geocode the data. Our machine approach was good but our researchers really wanted 100% accuracy, so you need humans disambiguating your geocoding.

We should also have thought further about exports and citations. The standard way to store and cite a website is a bookmark. It’s non trivial to store Web GIS data as it’s so huge. If you work purely in JavaScript you’ll find that difficult without hacks or HTML5. And you can have data that looks clean. Here some data on plague victims has been moderated. But the boundary set we have extends into the river – it isn’t accurate and that impacts on population, land data, etc. Problems don’t become apparent from the text.

The three big lessons from us: keep it simple – we tried to do too much, too many data sets for the time; when the design was kept simple it was successful. Garbage in = garbage out – geocoders aren’t magical! They are much, much stupider than a human no matter how good they are. Use open platforms – the API terms are worrying; we should have used open solutions.

James: Perhaps the Google bubble has burst – even FourSquare has moved to other mapping. APIs can be changed whenever the provider likes. And I should add that EDINA runs an open web service, OpenStream, that will let you access contemporary mapping information.

Ashley Dhanani and David Jeevendrampillai, UCL, “Classifying historical business directory data: issues of translation between geographical and ethnographic contributions to a community PPGIS project”

We are trying to focus on the place of suburbs and the link between suburbs and socio-economic change. Why are suburbs important? Well, around 84% of British people live in suburbs, we’ve seen the London Mayoral election focusing on suburbs, and the Queen is spending some of her jubilee in the suburbs.

We see small relationships, small changes in functionality etc. in suburbs that can easily be missed. We will talk about material cultural heritage – shapes of houses, directions of roads, paths and routes taken etc. We will relate that very material heritage to the socio-economic use of buildings/places over time. And we look at meaning – what does it mean socially to use the post office at different times over the last 200 years, perhaps.

We wanted to do various analyses here: a network analysis to consider the accessibility of particular spaces, and the changes in how people live in these spaces. So if we look at Kingston, in a rather manual mapping process looking at network structure, we can see in 1875 what the core area is – what was it like to be in these spaces? Again we can see change over time. And we can see the relationality to the rest of the city. This is just part of the picture of these places through time. So from a material perspective we can see how the buildings change – from large semi-detached houses to small terraced rows for instance. So we want to bring this information together and analyse it. Here we need to turn these historic structures into something more than a picture, to be able to look at our…

We are using software – cheap for academic use – that allows you to batch process TIFF files and do 80-90% of the work on a good underlying map. You can then really start doing statistics and exploring the questions etc. You can basically make MasterMap for historic periods!

Back to David. We also wanted to relate these networks, roads and buildings to their actual use, what was going on in these buildings at the time. So we took the Business Directory information and georeferenced it to provide points on the map. We needed to categorise the types of use in the business information. So we get these rather Damien Hirst style pictures – coloured dots on the road. We had a bit of a debate, me being an anthropologist, about problematising those categorisations… what is a Post Office? Is it a Financial Service? Is it a Depot? Is it Retail? Is it a Community Service? And the answer obviously is: what do you want to get from this data, why are you looking at it in the first place?

So we wanted to know what these elements of the built environment meant. What does a relocated post office mean socially? We wanted to add another layer of information: archives, memories, photos etc. We are taking the archive and making it digital. But I want to talk a bit about limitations here. Trying to understand a place through point information, looking at a top down map, doesn’t include that ephemeral information – the smell of a building perhaps. What we’re doing in this project is bringing in lots of academics from different disciplines and you get very different use of the same data sources. What we’ve found is that the gaps between understandings of the data have been very productive in terms of understanding our data, place, and what place means for policy based outcomes. And rather than coming to a coherent sense of place, actually the gaps, the debates, are very productive in themselves. We are one year in – we have 5 years funding in total – but those gaps have been the most interesting stuff so far.

And this kicking up of dust in the archives has only happened since we’ve been able to turn materials into digital form – they can be digitised, layered up, used together. Whilst this is very productive we will have gaps and slippages of categorisation, and these highlight our ways of understanding what goes on in place.

Q&A

Q1) What software did you use for this project?

A1) RX Spotlight [not sure I’ve got that down right – comment below to correct!]

Q2) Interesting to hear about the issues with Google Maps – are any of the Open Source, truly free services, better with mobile?

A2) There is an expectation on mobile phones – there’s a project we’re working on with LSE on the Charles Booth property maps – which is hampered by the available zoom levels. There are workarounds, other data providers are part of this option. You have CloudMade based on OpenStreetMap data. We have OpenStream for HE projects.

Humphrey: We planned to use the Google geocoder for Old Maps Online but they changed the terms and we expected high usage. We went for OpenStreetMap as truly free, but it’s problematic. And so we have implemented our own API from Vision of Britain. We do use Google Basic and again we are concerned about going over our limits. Using a geocoder does let you mark up data for use with other maps. But if you are using linked data and identifiers and it was Google or similar providing those, it would be very concerning.

James: Especially with mobile phones there is a presumption of very large scale. We were involved in the Walking Through Time project and the community wanted Google – the zoom levels killed it. There are issues around technical implementations. Think large scale for mobile. I do know that Google have been thinking of georeferencing as context for other information. Place is something else but implies some geography.

Comment: Leaflet works well on mobile.

James: We will come back to this later – discussing what we are using, what we need, etc.

And now for lunch… we’ll be back soon!

And we’re back…

Chris Fleet, National Library of Scotland, Developments at the NLS

I’m going to be talking about our historic mapping API which we launched about 2 years ago. This project was very much the brainchild of Petr Pridal, who now has his own company, Klokan Technologies. The API is very much a web mapping service.

So to start with, let me tell you a bit more about the National Library of Scotland. We aim to make our collections available, and with maps most of our collection is Scottish but we also have international maps in the collection. There are 46k maps as ungeoreferenced images with a zoomable viewer. The geo website offers access via georeferenced search methods. We’ve been a fairly low budget organisation so we’ve been involved in lots of joint projects to fund digitisation. And there is even less funding for georeferencing so we have joined up with specific projects to enable this. For instance we have digitised and georeferenced the Roy Military Survey map of the late 18th century, town plans from Ordnance Survey, aerial photographs of the 1940s, and Bartholomew mapping – we are fortunate to have a very large collection of these. And we’ve been involved in various mashup projects including providing maps for the Gazetteer for Scotland project.

So early on Petr had this idea about providing a web mapping service. There were several maps already georeferenced – a 1:1 million map of the UK from 1933 and several other maps at greater detail of similar areas. Although we use open source GIS and Cube GIS, we have found that ArcGIS is much easier for georeferencing, adding lots of control points, and dynamically visualising georeferenced maps. We used Petr’s MapTiler (this has now been completely rewritten in C++, is available commercially and runs much faster) and TileServer. These tools take your control point coordinates and re-project and tile your map for use with tools like Google Maps or Bing.

We launched in May 2010 with examples for how to use the maps in other places and contexts. We put the maps out under Creative Commons Attribution license – more liberal than the NLS normally licences content.

Usage to date took a while to take off, most of our users are from a UK domain – unlike most of our maps collection – and most of our use has been in the last year or so. I’ve divided usage into several categories – recreation, local history, rail history, education etc.

Bill Chadwick runs the Where’s the Path website and they use a lot of data – they display our historic maps, and other big websites use the link through that site, which is where lots of the hits have come from. A lot of our phone use has been for leisure – with the maps as a layer in another tool for instance.

Looking at how our maps have been used, the variety has been enormous – leisure walkers, cyclists, off-road driving, geocaching as well! We also have lots of photographers using our maps. And metal detecting – I had underestimated just how big a user group they would be, including the Portable Antiquities Scheme website. And there are many family history users of these maps – for instance the Borders Family History Society links to resources for each county in Scotland. There is also the area of specialist history: SecretWikiScotland – security and military sites; the airfield information exchange; Windmill World; steam train history sites etc. And another specialist area: SABRE – the group for road history; if you’ve ever wondered about the history of the B347, say, they are the group for you. They have a nice web map service that ingests multiple maps including our maps API. And finally Stravaiging – to stravaig is to meander – and you’ll find our maps there too.

Education has been quite a small user of our maps; EDINA and others already cater to this group. But there is a site called Juicy Geography, aimed at secondary school children, that uses them. And the Carmichael Watson project, based at Edinburgh University, shows georeferenced transcripts against our historic maps.

We know OpenStreetMap has been using our maps, though they don’t show up in our usage data. Through them we’ve connected with a developer in Ireland. This is one of those examples where sharing resources and expertise has been useful both for our own benefit and for OpenStreetMap’s coverage of Ireland.

The NLS itself is also now making use of the geo mapping and georeferencing services, and we now have a mosaic viewer for these maps. Through the API and other work we’ve been able to develop a lot of mapping, including a 10 inch to a mile series for the UK. And we are working on the 1:25k maps. We hope to add these to our API in due course.

In terms of sustainability, the NLS has supported and continues to support the API. We are looking at usage logging for large/commercial users – some users are huge consumers so perhaps we can licence these types of use. Ads perhaps?

Top tips? Well firstly, don’t underestimate how large and diverse the “geo” community is. Second, don’t overestimate the technical competence of the community – it is very variable. And finally, don’t underestimate the time required to administer and sustain the application properly – we could have worked much harder to get attention through blogs, tweets, etc. but it requires more serious time than we’ve had.

Q&A

Q1) One of your biggest user groups is outdoor recreation – why are they using historic mapping?

A1) I think generally they are using both, with the historic maps as an option. But there could be something cleverer going on to avoid API limitations. If you are interested in walking or cycling you can get more from the historic maps of 60 years ago than from modern maps.

Rebekkah Abraham, We Are What We Do, HistoryPin

I am the content manager for HistoryPin. HistoryPin, as I’m sure you will be aware, lets people add historical materials to a map. It was developed by We Are What We Do and we specialise in projects that have a real positive social impact. The driver was the growing gap between different generations. Photographs can be magical for understanding and communicating between generations. A photograph is also a piece of recorded history, rich in stories – it belongs to a particular place at a particular time. If you then add time you create really interesting layers and perspectives of the past. And you can add the present as an additional layer – allowing compelling comparisons of the past and the present.

So historypin.com is the hub for a set of tools for sharing historical content in interesting ways and engaging people with it. It’s based on Google Maps: you can search by place and explore by time. You can add stories, material, appropriate copyright information etc., and the site is global. We have around 80k pieces of content and are working with various archives such as the UK National Archives, National Heritage etc. And we are also starting to archive the present as well.
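
Purely as an illustration of the kind of record this implies (this is not HistoryPin’s actual schema or API), a pinned photograph boils down to an image plus a location, a time span and a licence:

```python
# Hypothetical structure for a "pinned" item: place + time + licence + story.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PinnedPhoto:
    title: str
    latitude: float            # where the photograph was taken
    longitude: float
    date_from: str             # ISO dates; a range copes with uncertain dating
    date_to: str
    licence: str               # e.g. "CC-BY", chosen by the contributor
    story: Optional[str] = None      # contributed memories or context
    source_url: Optional[str] = None

example = PinnedPhoto(
    title="Princes Street, looking east",
    latitude=55.9521, longitude=-3.1965,
    date_from="1905-01-01", date_to="1905-12-31",
    licence="CC-BY",
    story="Contributed from a family album.",
)
print(example)
```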

Photographs can be combined with audio and video – you can pin in events, audio recordings, oral history, etc. We’re also thinking about documents, text, etc. and how these can be added to records. You can also curate: you can create tours through materials and take others through them. The mapping and timeline tools can be very nice here. Again you can include audio as well as images and video.

We also have a smartphone app for iPhone, Android and Windows that lets you go into Street View to engage with history; you can add images and memories to the place where you currently are. You can fade between the present camera view and historic photographs, and you can choose to capture a modern version of that area – great if an area lacks Street View, and you are also archiving the present as well.

At the end of March we will launch a project called HistoryPin Channels – this will let you customise your profile much more, to create collections and tools, another way to explore the materials, and to see stories on your content. This will also work with the smartphone app and be embeddable on your own website.

And we want to open HistoryPin up to the crowd – to add tags, correct locations, etc. so that people can enhance HistoryPin. You could have challenges and mysteries – to identify people in an image, find a building etc.; ideas to start conversations. A few big questions for us: how do you deal with objects from multiple places and multiple times, and how do you deal with precision?

Pinning Reading’s History – we partnered with Reading Museum to create a hub and an exhibition to engage the local community. Over 4000 items were pinned, we had champions out engaging people with HistoryPin. The value is really about people coming together in small meaningful ways.

Q&A

Q1) We’ve been discussing today that a lot of us work with Google APIs but don’t communicate with them. I understand that HistoryPin has a more direct relationship?

A1) Google gave us some initial seed funding and technical support; everything else is owned and developed by We Are What We Do.

Q2) Who does uploaded content belong to?

A2) That’s up to the contributors – they select the licence at upload so ownership remains theirs.

Q3) Will HistoryPin Channels be free?

A3) Yes. Everything around HistoryPin will be free to use. We are committed to being not for profit.

Q4) Have you done any evaluation of how this works as a community tool / its social impact?

A4) Yes, there will be a full evaluation of the Reading work on the website in the next few weeks but initial information suggests there have been lasting relationships out of the HistoryPin hub work.

Stuart Macdonald, University of Edinburgh, AddressingHistory

This project came out of a community content strand of a UK Digitisation programme funded by JISC. The project was done in partnership with the National Library of Scotland and with advice from the University of Edinburgh Social History Department and Edinburgh City Council’s Capital Collections. This was initially a 6 month project.

The idea was to create an online crowdsourcing tool that combines data from historical Scottish Post Office Directories (PODs) with contemporaneous maps. These PODs are the precursors to phone directories/Yellow Pages. They offer a fine-grained spatial and temporal view of social, economic and demographic circumstances, providing residents’ names, occupations and addresses. They have several sub-directories – we deal with the General Directory in our project. There are also some great adverts – some fabulous social history resources.

Phase 1 of this work focused on 3 volumes for Edinburgh (1784-5, 1865, 1905-6) and historic Scottish maps georeferenced by the NLS.

The tool was built with OpenLayers as the web-based mapping client and it allows you to move a map pin on the historical map to correct/add a georeference for entries. Data is held in a PostgreSQL database and the Google geocoder is used to find the initial location of points on the map.
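
To make that pipeline concrete (a sketch only – AddressingHistory itself used the Google geocoder behind an OpenLayers client and a PostgreSQL store), turning a directory entry’s address string into an initial map position looks roughly like this, here using geopy’s Nominatim geocoder as a stand-in:

```python
# Sketch: geocode a Post Office Directory entry's address to get a starting
# point that users can later correct by dragging the map pin.
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="pod-geocoding-example")

entry = {"name": "Smith, John", "profession": "baker",
         "address": "12 Candlemaker Row, Edinburgh"}

location = geolocator.geocode(entry["address"])
if location:
    entry["lat"], entry["lon"] = location.latitude, location.longitude

print(entry)
```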

The tool had to be usable for users of various types – though we mainly aim at local historians etc. We wanted a mechanism to check user generated content such as georeferences and name or address edits/annotations. And it was deemed that it would be useful to have the original scanned directory page available. We amplified both the tool and the API via social media channels – blog, Twitter, Flickr etc.

So in this screenshot of the tool you can see the results, the historic map overlay options, the editing options, the link to view the original scanned page, and the three download options – text, KML, …

Phase 2 sought to develop functionality and to build sustainability by broadening geographic and temporal coverage. This phase took place from Feb–Sept 2011. We have been adding new content for Aberdeen, Glasgow and Edinburgh, all for 1881 and 1891 – those are census years and that’s no coincidence. But much of phase 2 was concerned with improving the parser and improving performance; our new parser has a far improved success rate. Additional features added in phase 2: spatial searching via a bounding box; associating map pins with search results; searching across multiple addresses; and we are aiding searching by applying Standard Industrial Classifications (SIC) to professions.
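
The parser itself is not something I can reproduce here, but as a toy illustration of the task it faces, a well-behaved General Directory line of the form “Surname, Forename, occupation, address” can be split into fields with a simple pattern (real POD lines, with OCR noise and inconsistent punctuation, are far messier – hence the configuration-driven approach described below):

```python
# Toy illustration only: split a clean directory line into its fields.
import re
from typing import Optional

LINE = re.compile(
    r"^(?P<surname>[^,]+),\s*(?P<forename>[^,]+),\s*"
    r"(?P<occupation>[^,]+),\s*(?P<address>.+)$"
)

def parse_pod_line(line: str) -> Optional[dict]:
    match = LINE.match(line.strip())
    return match.groupdict() if match else None

print(parse_pod_line("Brown, James, wright, 5 Rose Street"))
# {'surname': 'Brown', 'forename': 'James', 'occupation': 'wright',
#  'address': '5 Rose Street'}
```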

We have also recently launched Augmented Reality access via the Layar phone app. This allows you to compare your current location with AddressingHistory records – people, professions etc. – from the past. This is initially launched for Edinburgh but we hope to also launch for Aberdeen and Glasgow as well as other cities as appropriate. You can view the points on a live camera feed, or view a map. Right now you can’t edit the locations yet but we’re looking at how that could be done. You can also search/refine for particular trade categories.

Lessons learned. I mentioned earlier how this sort of project compares to GalaxyZoo – they have 60k galaxies, we only have 500k people in Edinburgh. That means we’ve really begun thinking carefully about what content has interest for our potential “crowd” and the importance of covering multiple geographic locations/cities. In this phase we have been separating the parsing from the interface and back end storage – this allows changes to be implemented without affecting the live tool. We’ve been externalising the configuration files – editable XML-based files to accommodate repeated OCR and content inconsistencies, run with the POD parser to refine parsed content. The parsing and refining process is almost unending – a realistic balance needs to be struck between what should be done by machine in advance and what is left for users to correct. And we need to continue to consult with others interested in this era and already using the PODs.

In terms of sustainability the tool is openly available. There are some business models we’ve been considering: revenue generation via online donations, subscription models, freemium possibilities, academic advertising. We welcome your suggestions.

Phase 2 goes live very soon.

Success of these projects is about getting traction with the community – continued and extended use by that community. Hopefully adding new content will really help us gain that traction.

James: It’s worth saying that before the project we looked at the usage of the physical PODs – they are amongst the most used resources in the city libraries. This stuff is being used for research purposes, which was one of our driving motivations.

Q&A

Q1) Presumably genealogists are using this – what feedback have you had?

A1) I think the appeal is the population coverage and having multiple years – being able to track people through time. We’ve had really good feedback but usage has been modest so far.

Nicola) Genealogists want a particular area at a particular time and that’s when you capture their interest. It’s quite tricky because that’s the one thing they are interested in; all that material is potentially available, but you need their engagement to make the labour intensive process of adding new directories worthwhile, and they want their patch covered before they will engage, so there is a balance to be struck there.

And with that we are onto the next session – we are going to grab a coffee etc. and then join a wee breakout session. I’ll report back from their key issues but won’t be live blogging the full discussions.

So…

1. GAP Analysis

  • Use Google geo products if you must but beware
  • Think twice about geo referencing
  • There are other geocoding tools
  • There are text parsing tools

2. Mobile futures

  • Do I want to go native or not? There’s a JISC report from EDINA on mobile apps and another set of guidance coming out soon.

Kate Jones, University of Portsmouth, Stepping Into Time

I am a lecturer in Human Geography at Portsmouth but I did my PhD at UCL working on health and GIS. But I’m going to talk today about data on bomb damage in London and how that can be explored and clustered with other data to make a really rich experience.

And I want to talk to you first about users, and the importance of making user friendly mapping experiences as that’s another part of my research.

I’m only two months into this project but it’s already been an interesting winding path. When you start a geography degree you learn “Almost everything that happens, happens somewhere and knowing where something happens is critically important” (Longley et al 2010). So this project is about turning data into something useful, creating information that can be linked to other information and can become knowledge.

For user centred design you start by designing a user story. So we have Megan, a student of history; Mark, a geography undergraduate; or Matthew, an urban design post-graduate. For each user we can identify the tools they will be familiar with – they will know their own software etc. But they all use Google, Bing, Web 2.0 type technology. Many of them have smartphones. Many have social networking accounts. I was really surprised: I had assumed this generation would be really IT literate, but while they are fine with Facebook they are really quite intimidated by desktop GIS. It is important to have appropriate expectations of what knowledge they have and what they want to do. This group also learn best with practical problems to solve, and they love visual materials. And they can find traditional lectures quite boring.

There are challenges faced by the user:

(1) determining available data – how do we make sure we only do one thing once, rather than replicating effort

(2) understanding the technology, concepts and methods required to process and integrate data

(3) implementing the technical solutions – some solutions are very intimidating if you are not a developer. I worked with an urban design student on a previous usability project – he downloaded the data from Digimap but couldn’t deal with even opening the data in a GIS; eventually he did it in Photoshop, which he knew how to use, hand colouring maps etc.

So we want to link different types of data related to London during the Blitz. It’s aimed at students, researchers and citizen researchers – any non commercial use. We want to develop web and mobile tools so that you can explore and discover where bombs fell and the damage caused – and the sorts of documents and images linked to those locations. For the first time this data will be available in spatially referenced form, allowing new interpretations of the data.

We will be creating digital maps of the bomb census – the National Archives is scanning these and we will make them spatially referenced. We will add spatial data for different boundaries – street/administrative boundaries etc. And then we will explore linkage to spatially referenced images, creating a web mapping application for a more enriched and real sense of the era.

So what data to use? Well I’m a geographer not a historian, but my colleague on this project at the National Archives pulled out all of the appropriate mapping materials, photographs etc. It’s quite overwhelming. We will address this data through two types of maps:

1) Aggregate maps of Nightly Bomb Drops during Blitz

2) Weekly records – there are over 500 maps for region 5 (central London), so we are going to look at the first week of the Blitz and look at 9 maps of region 5.

So here is a map of the bomb locations – each black mark on the map is a bomb – when there are a lot it can be hard to see exactly where each bomb landed. We will be colour coding the maps to show the day of the week the bomb fell, and will show whether it’s a parachute or an oil bomb, drawn from other areas of the archive.
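
As a sketch of how that kind of symbology might be rendered on a web map (illustrative only, with made-up sample records rather than project data):

```python
# Illustrative sketch: plot bomb records coloured by day of week, with the
# bomb type shown in the popup. The records below are invented examples.
import folium

DAY_COLOURS = {"Mon": "blue", "Tue": "green", "Wed": "purple",
               "Thu": "orange", "Fri": "red", "Sat": "darkblue", "Sun": "black"}

records = [
    {"lat": 51.501, "lon": -0.090, "day": "Sat", "kind": "high explosive"},
    {"lat": 51.507, "lon": -0.105, "day": "Sun", "kind": "parachute mine"},
]

m = folium.Map(location=[51.505, -0.09], zoom_start=13)
for r in records:
    folium.CircleMarker(
        location=[r["lat"], r["lon"]],
        radius=5,
        color=DAY_COLOURS[r["day"]],
        fill=True,
        popup=f"{r['kind']} ({r['day']})",
    ).add_to(m)
m.save("blitz_sample.html")
```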

The project has six work packages and the one that continues across the full project is understanding and engaging users – if you want to be part of this usability work do let me know.

We have been doing wireframes of the interface using a free tool called Pencil. We will use an HTML prototype with users to see what will work best.

So our expected project outcome is that we will have created georeferenced bomb maps – a digital record of national importance. This data will be shared with the National Archives – reducing the use of the original fragile maps and aiding their preservation. We are also opening up the maps so that specialist skills are not needed to prepare and process the data – you only need to do one thing once. We’ll be sharing the maps through ShareGeo. And there will then be some research coming out of these maps – opportunities to look at patterns and compare the data to social information etc.

Learning points to date will hopefully be useful for other projects.

Before the National Archives I had a different project partner who pulled out of the project as they were not happy with the licensing arrangements etc. – I’ve blogged suggestions on how to avoid that in the future: http://blitzbomcensusmaps.wordpress.com/2012/02/09/.

Scanning and digitising delays – because lots of JISC projects were requesting jobs from the same archive! But I negotiated 2 scans to use as sample data for all other work; the final data can then be slotted in when scanned in June. Something to bear in mind in digitisation projects, especially where more than one project in the same stream is working with the same archive/partner.

Summary: Linking historic data using the power of location. If you are interested in being part of our user group – please contact me via the blog or as @spatialK8

Natalie Pollecutt and Deborah Leem, Wellcome Library, Putting Medical Officer of Health reports on the map: MOH Reports for London 1848-1972

Natalie: This is a new project. I heard about a tool called mapalist, but I ended up using Google Fusion Tables – it was free, easy to use, and there was lots of support information. I started off by doing a few experiments with Google Fusion Tables. So this first map is showing registered users, then I tried it out with photography requests to the library – tracking orders and invoice payments. So I showed this off around the office and someone suggested our Medical Officer of Health Reports as something that we should try mapping.

These reports are discrete – 3000 in total – but they are a great historical record. Clicking on a point brings back the place, the subjects, and a link to view the catalogue record – you can order the original from there.

Deborah: The reports are the key source on public health from the mid 19th to the mid 20th century. They were produced by the Medical Officers of Health in each local authority, who produced annual reports covering outbreaks of disease, sanitation, etc. Lots of ice cream issues at one point in the 19th century – much concern about health due to poor quality ice cream. They vary in length but the longest are around 350 pages.

Natalie: On our shelves these are very much inaccessible bundles of papers. I wanted to talk more about the tools I considered. I tried out mapalist.com (addresses); maptal.es (search for a location); mapbox.com (not free); mashupforge.com; targetmap.com; Unlock (EDINA); Recollector (NYPL); the Google Maps API; and the Google Fusion Tables API. In the future I will be trying the Google Maps API, Google Fusion Tables and also batch geocoding, which you can do from the tables.
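
Batch geocoding in that spirit (a stand-in sketch, not the Fusion Tables workflow itself) amounts to looping over a small test set of place names and recording coordinates – which also fits the “test in small batches” tip below:

```python
# Stand-in sketch: batch geocode a handful of catalogue place names.
import time
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="moh-batch-geocode-example")
places = ["Hackney, London", "Camberwell, London", "Woolwich, London"]

results = {}
for place in places:
    location = geolocator.geocode(place)
    results[place] = (location.latitude, location.longitude) if location else None
    time.sleep(1)  # be polite to the geocoding service

print(results)
```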

Deborah: These are our catalogue records. Our steering committees want to search materials geographically, so we are trying to enhance our catalogue records for each report that we are digitising – about 7000 for the London collection in scope. We needed to add various fields to allow search by geographic area and coverage date. And what we are trying to think about is the change in administrative boundaries in London – significant changes in the 19th century and also the change to boroughs in 1965. Current areas will be applied but we are still working on the best way to handle historic changes, so we hope to learn from today on that.

Natalie: One of the things we’ve begun to realise, especially today, is that the catalogue record isn’t the best place for geographic information. Adding fields for geographic information, and drawing this out of other fields, like the 245 title field, is helpful but we need to find a way to do this better – how do we associate multiple place names?

This was very much an experiment for us. But we need to rethink how to geocode the data from library catalogue records – Google will give you just one marker for London even if there are ten records, and that’s not what we’d want as cataloguers. We have learnt about mapping our data – and about how to think about catalogue records as something that can be mapped in some way. Upgrading the catalogue records for the Medical Officers of Health Reports has been very useful for us to do anyway.
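
One simple way around the “one marker for ten records” problem (a sketch of the general idea, not the Wellcome workflow) is to group records by place before mapping, so that each place gets a single marker whose popup lists everything catalogued there:

```python
# Sketch: group catalogue rows by place name, then emit one marker per place.
from collections import defaultdict

records = [  # hypothetical catalogue rows: (place, title)
    ("Hackney", "MOH report 1895"),
    ("Hackney", "MOH report 1896"),
    ("Camberwell", "MOH report 1901"),
]

by_place = defaultdict(list)
for place, title in records:
    by_place[place].append(title)

for place, titles in by_place.items():
    popup_text = f"{place}: {len(titles)} report(s) – " + "; ".join(titles)
    print(popup_text)   # on a real map this text would become the marker popup
```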

Top tips from us:

  • Test a lot, and in small batches, before doing a full output/mapping – it makes it easier to make changes. 3000 is too many to test things on really; you need to trial on a smaller batch.
  • Know where you’ll put your map – ours was an experiment. I blogged about it but it’s not on the website, it’s a bit hidden. You need to know what to do with it.
  • Really get to know your data source before you do anything else! Unless you do that it’s hard to know what to expect.
Deborah: Our future plan is to digitise and make freely available the 7000 MOH reports via the Wellcome Digital Library by early 2013. And we hope to enhance the MOH catalogue records as well.

Natalie: Initial feedback has been really positive, even though this was a quick, dirty experiment.

James: There are people here you can tap – looking at Humphrey re: nineteenth century – and we have some tools that might be useful. We can chat offline. This is what we wanted out of today – exchange and new connections.

Stuart Dunn, KCL, Digital Exposure of English Place-Names (DEEP)

I’m going to talk a bit about the DEEP project, funded under the recent JISC Mass Digitisation call. It’s follow-on work from a project with our colleagues on this project at EDINA. This is a highly collaborative project between King’s College London, the University of Edinburgh Language Technology Group, EDINA, and the National Place Names Group at Nottingham.

DEEP is about placenames, specifically historic placenames and how they change over time. Placenames are dynamic, and the way places are attested also changes to reflect those changes. The etymological and social meanings of placenames really change over time. Placenames are contested; there is real disagreement over what places should be called. They are documented in different ways – there are archival records of all sorts, from Domesday onwards (and before). And they have been researched already: the English Place-Name Society has already done this for us – they produced the English Place-Name Survey, 86 (paper) volumes in total, organised by county. There are no hard and fast editorial guidelines for how this was produced, so the data is very diverse.

There are around 80 years of scholarship, covering 32 English counties, 86 volumes, 6157 elements, 30517 pages, and about 4 million individual place-name forms – but no one yet knows how many bibliographic references.

Contested interpretations and etymologies – and some obscene names, like “Grope Lane”, help show how contested these are. So we are very much building a gazetteer that will connect and relate appropriate placenames.

The work on DEEP is a follow-up to the CHALICE project, which was led by Jo Walsh and was a project between EDINA and the Language Technology Group at Edinburgh. That project extracted important places from OCR text and marked them up in XML. We are adopting a similar approach in DEEP. The University of Belfast is to digitise the Place-Name Survey, then the OCR text will be parsed, and eventually this data will go into the JISC Unlock service.

We have been trying to start this work by refining the XML processing of the OCR. Belfast’s tagging system feeds the parser that helps identify historic variants, etc. The data model changes from volume to volume, which is very challenging for processing. In most cases we have parish-level grid references but the survey goes down to township, settlement, minor name and field name levels. And we have the challenge of varying counties, with administrative terminology variance. So we are putting data into the Metadata Authority Description Schema (MADS) so that we don’t impose a model but retain all the relevant information.

Our main output for JISC will be point data for Unlock. Conceptually it will be a little bit like GeoNames – we are creating Linked Data so it would be great to have a definitive URI for each place no matter what the variants in name.
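
In generic Linked Data terms (this is not DEEP’s MADS-based model, just an illustration of the “one URI, many attested forms” idea), a single place URI can carry its variant spellings as alternative labels:

```python
# Sketch: one URI per place, with attested variant forms as SKOS altLabels.
# The namespace, identifier and variant forms here are all invented examples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/place/")   # hypothetical namespace

g = Graph()
place = EX["grimston"]                        # hypothetical identifier
g.add((place, RDF.type, SKOS.Concept))
g.add((place, SKOS.prefLabel, Literal("Grimston", lang="en")))
for variant in ["Grimeston", "Grimestona", "Grymston"]:
    g.add((place, SKOS.altLabel, Literal(variant, lang="en")))

print(g.serialize(format="turtle"))
```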

Not only is Google problematic but so are the geographic primitives of points, lines and polygons. Pre-OS there is very little data on the geographic associations of place-names; points are arbitrary and dependent on scale; administrative geographies change over time; even natural features can mislead – rivers move over time, for instance.

We are talking to people like Vision of Britain, both to see if we can feed into that site and whether we can use that data to check ours. One of the projects I am very interested in is the Pleiades project, which has digitised the authoritative map for ancient Roman and Greek history. This is available openly as Linked Data. That’s what I’d like to see happening with our project, which would include varying names, connections, bibliographic references, and a section of that data model from the MADS classification.

Another important aspect here is crowdsourcing. We, and the Nottingham partners in particular, will be working with the enthusiastic place names community to look at correcting errors and omissions in the digitisation and the NLP; to validate our output with local knowledge; to add geographic data where it is lacking – such as field names; to identify crossovers with other data sources, etc. We will be discussing this at our steering group meeting tomorrow.

And finally a plug for a new AHRC project! This is a scoping study under the Connected Communities programme on crowdsourcing work in this area.

Q&A

Comment: I would be interested to see how you get on with your crowdsourcing – we work on Shetland Place Names with the community and it would be really interesting to know how you cope with the data and what you use.

James: Are you aware of SWOP in Glasgow? You might be interested – the tools they use might be applicable or useful.

Q1) I would be interested in seeing how we can crowdsource place names from historic maps as well – linking to Georeferencer project maps or Old Maps Online – that could be used to encourage the community to look at the records, with some sort of crowdsourcing tool around that.

A1) As you know the British Library’s GeoReferencer saw all 700+ maps georeferenced in four days, there is clearly lots of interest there.

Humphrey: We are proposing a longer term project to the EU in this area. We haven’t been funded for an API; we’ve done much of what has been discussed today but those things are not accessible because of what/how we’ve been funded in the past.

And with that we are done for the day! Thank you to all of our wonderful speakers and very engaged attendees.


Programme Now Available for Geospatial in the Cultural Heritage Domain – Past, Present & Future

We are delighted to announce our final programme for the Geospatial in the Cultural Heritage Domain – Past, Present & Future (#geocult) event which takes place next week, Wednesday 7th March 2012, at the Maughan Library, King’s College London.

A fantastic programme of speakers will explore the use of geospatial data and tools in cultural heritage projects with breakout discussions and unconference sessions providing opportunity for networking and further discussion of this exciting area.

We are delighted to announce that our speakers for the day will include:

Humphrey Southall of the University of Portsmouth will talk about OldMaps online, which just launched today at the Locating the Past (#geopast) event in London.

Stuart Dunn from King’s College London, talking about the new Digital Exposure of English Place-Names (DEEP) project which is building a gazetteer that tracks the changing nature of place names.

Chris Fleet of the National Library of Scotland, and co-author of Scotland: Mapping a Nation, will talk about recent developments at the NLS.

Claire Grover of the University of Edinburgh will talk about the new Digging Into Data project Trading Consequences, which will use data mining techniques to investigate the economic and environmental impact of 19th century trading.

Natalie Pollecutt from the Wellcome Library will be talking about their project: Medical Officers of Health (MOH) Reports for London 1848-1972 which is building a free online data set on public health in London.

Michael Charno, Digital Archivist and web developer at the Archaeology Data Service, will talk about Grey Literature and spatial technologies.

Stuart Nicol of the University of Edinburgh will talk about Visualising Urban Geographies, a recent project to create geospatial tools for historians.

Jamie McLaughlin from the University of Sheffield will talk about Locating London’s Past, a website which allows you to search digital resources on early modern and eighteenth-century London, and to map the results.

Stuart Macdonald of University of Edinburgh will talk about AddressingHistory, a website and crowdsourcing project to geospatially reference historical post office directory data.

Sam Griffiths of University College London, will talk about “Classifying historical business directory data: issues of translation between geographical and ethnographic contributions to a community PPGIS (Public Participation GIS) project”.

Kate Jones of the University of Portsmouth will talk about Stepping Into Time, a project to bring World War Two bomb damage maps into the real world by using web and mobile mapping technology.

We will also be welcoming Rebekkah Abraham and Michael Daley from We Are What We Do to talk about HistoryPin, a website and mobile app which enables you to browse and add historical images to a map of the world, exploring the past through georeferenced photographs.

The detailed programme for the day can be found on our Eventbrite page where you can also book your free place at this event. Bookings close on Friday 2nd March 2012 so book soon!

We will also be live blogging, tweeting and recording this event so do also keep an eye on the blog here, the #geocult hashtag, and on our Geospatial in the Cultural Heritage Domain – Past, Present & Future page where you will be able to access materials after the event.


Upcoming Event: “Geospatial” in the Cultural Heritage domain, past, present and future

We are very excited to announce that bookings are now open for the next JISC GECO workshop!

“Geospatial” in the Cultural Heritage domain, past, present and future (#geocult), taking place on Wednesday 7th March 2012 in London, will be an opportunity to explore how digitised cultural heritage content can be exploited through geographical approaches and the types of tools and techniques that can be used with geo-referenced/geotagged content.

Issues we are keen to discuss include selection of maps/materials, issues of accuracy and precision, staff and technical requirements, sustainability, licensing.

The event will take place at Maughan Library, Chancery Lane, part of Kings College London. We are most grateful to the lovely people at the KCL Centre for e-Research for securing us this super location.

Library Entrance by Flickr User maccath / Katy Ereira

We are currently confirming the last few speakers and titles for talks so will post something here on the blog once the programme is finalised.

We already have a great draft schedule and some fantastic speakers confirmed so this promises to be a fascinating and stimulating day of talks and breakout sessions.

As we are sharing details of this event at pretty short notice we would be particularly grateful if you could book your place as soon as possible and please do tell your colleagues and friends who may be interested!

Book your free place now via our Eventbrite page:  http://geocult.eventbrite.com/

If you would like to propose any additional talks or ask any questions about the event please email the JISC GECO team via:  edina@ed.ac.uk.



How do you solve a problem like Geo? Highlights from the JISC Geo Event and Discussions

It’s been a few weeks since the JISC Geo Tech & Tools Product Launch event at London so we thought it was time we updated you on some of the follow up activities…

On the second day of the JISC Geospatial Event in London, we had two sessions to gather around tables (and/or move between them) and discuss some questions around the themes emerging from the JISC Geo projects. It followed on from the previous day’s aim “to figure out which products are going to help catalyse the spatial revolution in .AC.UKs”, but this session involved discussion that looked wider than the presented products.

In session 1 discussions included:

For session 2 themes running through the projects and 6 stages/ways of working with data were identified and discussed.

We would love to hear your thoughts on the discussions – leave your comments on any of the blog posts linked to here or add your comments on any of these topics here. If you were part of these discussions and think an important point didn’t get noted down, do add it as a comment as well.

We will be sharing more materials from the JISC Geo End of Programme events early in 2012 but in the meantime here are some video highlights – also available from our new Podcast stream [Click on subscribe via iTunes] – to enjoy:

JISC Geo Timelapse

View the whole of the first day of the JISC Geo in just 1 minute:  JISC_Geo_Launch_Event_Timelapse

Highlights from the JISC Geo Show & Tell 

Hear about the best projects at the Show & Tell events where all 12 JISC Geo projects showed off their work along with some guest exhibitors: Highlights from the JISC Geo Show & Tell


Space and Time in the Digital Humanities Workshop, hosted by NeDiMAH and JISC LiveBlog

Today we are liveblogging from the Space and Time in the Digital Humanities Workshop, hosted by NeDiMAH and JISC which is taking place in London and follows on (in the same venue) from the JISC Geo meeting earlier in the week. As usual this is a liveblog so all of the same caveats as normal apply about errors and omissions. The hashtag for today is: #spacetimewg

Leif Isaksen (also of the PELAGIOS project) is introducing us to the day by saying that Greenwich is the place where space and particularly time is measured from. If you go out into Greenwich you will see a big laser in the sky and that’s the Greenwich Meridian. And if you look at Ptolemy’s Greek Parallels intercept you will see that London is also marked there. Ptolemy’s regular grid was the first to start looking at time in…

NeDiMAH is a newly funded network for Digital Methods in the Humanities and Arts from the European Science Foundation, with objectives to create a map visualising the use of digital research across Europe; an ontology of digital research methods; and a collaborative interactive online forum for the European community of practitioners active in the area. There are also a number of working groups.

Today’s event is arranged by Working Group 1: Space & Time, coordinated by Jens Andresen, Shawn Day, Leif Isaksen, Eero Hyvonen and Eetu Makela. There will be four workshops over four years and you can find out more about these on our new website: http://spacetimewg.pbworks.com/.

The format for today will be four sessions on place, period, event, summary. We will have 30 minutes of position papers, then 30 minutes group discussions then 30 minutes of general discussion for each topic.

Panel Session on ‘Places’

What are Places? – Humphrey Southall, University of Portsmouth/Great British Historical GIS Project

My basic position paper is this slide: a table of different kinds of geographical entity and the role of gazetteers.

There’s a certain way in which places coincide with geographic features. In London a lot of our places have names like Royal Standard, Sun in the Sands, etc. So where does that come from? Well initially it’s a pub. What else is it? It’s in some sense a roundabout, a rotary. And again it’s a place – a place on bus timetables. And it’s a conservation area – a named polygon with a clearly defined boundary.

My second example is the Nag’s Head in Islington. A pub initially. Now an amusement arcade. But in Wikipedia the place is still there even without the pub. It’s a bus timetable location again. It’s also a town centre area: it’s a bounded polygon.

Of all these types of places the Elephant and Castle is the best known example but there are very many.

So if we look at another example: a map from the Guardian earlier this year of England’s most deprived areas. These are output areas from research, though they are not places in the everyday sense. Jaywick (in Essex) was found to be the most deprived area in Britain. A discussion broke out in the comments about the second most deprived place, Breckfield. A commenter says that these areas do not exist; he is rebuffed by another who gives evidence: it’s a pub, it’s a centre, etc. It’s about a whole set of features.

Linda Hill writes about place with the example of Gruinard as a place. I disagree. It’s a series of features, but in this example they are not social but geographical features. Historically this is how place is defined. Groome, for instance, describes Gruinard as “a bay, an island, a stream” etc. So we need to differentiate between geographical features and administrative areas or places. If we are interested in history and cultural research it is about administrative units, not geographic features.

From Place to Facts – Franco Niccoletti

Let’s go back to place. This is a very ambiguous term. It is an “extent in space (in the pure sense of physics)” and/or an abstract concept. We are not so interested in places per se; we are interested in facts and stories. We are interested in seeing which facts, objects and stories happen in a particular place. Relationships between places, between events, between objects are important to us.

There are some features and challenges of “place”. There is some fuzziness. Sometimes it is difficult to draw the borders of some place. Spatial entities, objects occupying space and located in places are affected by spatial relations – mereology, topology, is the place where an object is located part of it?

The expansion of the concept of place, and of the concept of appellation, to facts is important here. A place X is identified by an appellation a(X) – which gives you either an absolute reference, relative to an overarching system, or a relative reference, relative to some local system, and most commonly a place-name. We reason on appellations: we need to relate a(X) with a(Y) etc. to relate facts – what happened in the same place. However appellations have their own issues. They are imprecise. They are time-dependent – the place may change or the name of the place may change over time. And they are also space-dependent, as there may be different appellations for the same place or the same appellations for different places. They are language and culturally biased. So is the fuzziness in appellations or in places?

And finally we have gazetteers and thesauri. We are all familiar with these. Gazetteers are lists of appellations that try to normalise them, i.e. referring appellations to one another or to some reference appellation system (co-ordinates). But gazetteers do not take into account space, time or cultural variability – and does normalising appellations influence our concept of places? Is there any way of dealing with the needed extensions?

Place Reference Systems – Simon Scheider

I have just finished my PhD at the University of Münster, where I am based in the Semantic Interoperability Lab. We want to make information work together in terms of syntax and semantics. I will talk about an idea I had with Krzysztof Janowicz about place reference systems. I will first talk about what reference systems are, to explain what a place reference system could be.

Many of you will be aware of semantic technology, ontologies etc. Our opinion is that ontologies are very useful for doing this in a certain way as they constrain options. But they have one serious problem, which is that they do not account for the problem of reference. In philosophy this is about how to explain what symbols stand for. Reference systems, in contrast, account for this problem of reference in a practical way. Spatial reference systems do this well. There are formal theories – cartesian coordinates are a formal theory in a certain way. The primitives of the theory, the coordinates, are fixed by convention. There are certain operations that allow you to find a location from a certain location. There are other reference systems – calendars are temporal reference systems. We can reidentify points in time using these reference systems. So how do we in general invent these reference systems? And a place reference system is desperately needed.

Places are not the same thing as locations. Every place has a location but they are not identical concepts. This is very clear from the previous speakers…

Comment from Humphrey: But not all places have a location…

Well, where is the place of medicine in the Battle of Trafalgar? Is it the location on the battlefield where treatment took place? Or is it, say, HMS Victory – but that is, at this time, in Portsmouth. It is a tricky issue.

So, the question for us is how do we generate these place reference systems.

There is much more to say on this – I wrote my PhD on this – but there is also a paper I would direct you to here: Place as media of containment by Simon Scheider & Krzysztof Janowicz [PDF can be downloaded here]

We are now breaking into discussion groups on Place.

The groups are reporting back:

Group 1 discussed the ideas of concept based systems of place. And also discussed patterns of movement and how that relates to these concepts. Behind all of that was the issue of addressing the needs of the specific user. It is more about customising the user interface to the user’s specific needs. That was the overall subject we were circling around on.

Comments from speakers:

Simon: on the user centered view. If you look at the issue of ontologies and semantics the user is always important. You should never leave this out of ontology or use of reference systems. One may come up with different ontologies, different reference systems dependent on the use of these systems.

Humphrey: In some ways what was key to my presentation was the idea of places in consciousness and discourse. Consciousness is very individual and not so helpful. Discourse is about sharing with others. So we cannot focus too narrowly on users, you need to focus on communities of use perhaps.

Franco: I may either totally agree or totally disagree with the idea of users! Which users do you mean? Users of today? Users of tomorrow? Goals are perhaps a better idea: what is the purpose of using this place, what do we want to achieve? Users can prioritise current use unhelpfully. We want to think about intended use, a community of use, and we can use the shorthand of users. If on the contrary we mean let’s investigate place and time for archeologists, say, then I would totally disagree.

Shawn Day: These are some great issues to raise! It’s really important to think these things through.

Laura: Thinking too much about uses can be problematic. Thinking about travelling say. We think about user travelling… if I’m a younger person I want discos and pubs. I’m travelling with children, so maybe I need hotels… these variant needs are important

Group 2: We felt that the context is really important. Context – global vs local for instance is important. Scale is very important. And in terms of users we need to think about the information we want to provide. The problem is how to present that information.

Comments from speakers:

Simon: To address the user issue further. There is a difference between user centeredness and goal centeredness. We all have goals and we can share them of course. We can create ontologies that are widely usable. Goals and objectives can be shared.

Humphrey: The comment about talking about problems and not solutions… I was at a three day workshop in Seattle not unlike today. The problem was that that meeting did not seem to lead to anything (other than a book). Perhaps a third of the people there agreed that linked data gazetteers were the way forward; the others didn’t know what to do with it. The PELAGIOS workshops do show the way forward in this area. There is work going on that needs some expository stuff.

Eero Hyvonen: I’m a computer scientist and from that perspective I want to give you some use cases. We have ontologies available but what we are looking for from those doing cultural heritage side: what are the problems you want to solve? Like those use cases about travelling etc. When we have those goals, those use cases, we can find issues and use those to find appropriate methods.

Leif: That’s certainly something we can talk about later today.

Group 3 discussed whether we are looking for a global solution or some local solutions for a problem. For instance, with archeological data structures of local grids, local reference systems could be referenced to a universal reference system on a use case or type basis – that might be better than trying to create everything universally. So if you have a book, perhaps it uses a book system that refers back to a global system – maybe a way to deal with groups of things rather than a universal system.

Simon: That’s a very concrete problem. This is a problem we have in archeology, also in history etc. We have local, very hard to understand systems. We need to understand and translate them back to other systems to understand things. We need to think about what would be in that general reference system to solve this. A solution should also be triggered by practical questions and this is a very good one.

Humphrey: Is this about spatial coordinate systems or something else? If you start from an archeological systems you would think of location as fundamental but names much more fuzzy. If you define things by a name but without a location that can still be quite a solid thing. For instance if you take the example of Camelot. No-one knows where it is but it is a very clear, very concrete entity. And it is a geographic entity. But we do not know the location and in history this type of entity is not unusual.

Franco: Well possibly Camelot is not a historical place but all the same my presentation was about place as where things happen rather than location and Camelot would fit into this system. Space and place are strongly linked ideas but on the other hand using the same framework for very different content leads to very poor information. So for example libraries – general libraries are valuable but specialist libraries are also essential.

Comment: What is most useful gets used the most. Cross referencing systems can be dangerous in some sense. There is a Darwinian element here – what happens with systems that are not as useful.

Simon: Use cases are helpful. You can start doing something, see how it is used and understand the use case that way.

Group 4: We ran a little out of time discussing the issues. One thing that is worthwhile to add: we talked a lot about pragmatic approaches. We discussed that place is much more a social construct than a geographical thing, so how do you establish equivalence? And when does it not matter to have equivalence? We also thought about PELAGIOS’ approach – mapping systems against a baseline gazetteer. People can annotate their own data then find connections between data. Solve the common denominator issue that enables you to achieve interlinking.

Humphrey: Again I’m not sure what our collective baseline is. In some ways my presentation was a grossly simplified critique of gazetteers. A lot of national digital gazetteer providers are very much thinking about features, but time creates all sorts of muddles. Wikipedia and DBpedia have a lot of knowledge in them but they are scary in terms of features. I’m not sure GeoNames is much better.

Shawn: We have three real issues we really need to address. The idea of a variety of case studies is a good one, but how do we deal with the unknown user and be as strong as we want to be? And how do we deal with abstracting, and not abstracting too far? There is a practical way of doing this, of thinking of which questions we want to answer. The Stanford folks for instance have been talking about how to deal with users that need simple tools to work with, who need to quickly understand the issues. People have been grappling with this for centuries so I don’t think we will solve this today but we will have some great discussion.

Panel Session on ‘Periods’

History in Context – Ceri Binding, Hypermedia Research Unit, University of Glamorgan

Although we have three distinct sessions today it’s impossible to entirely separate these concepts. Objects connect to events, events connect to places and they connect to periods.

Simple attribute assignments contain a lot of complexity and implicit semantics, and lack flexibility. We need to be able to document the statements we are making and the provenance of those statements.

We have been using the CIDOC Conceptual Reference Model to create an event-based model rather than just attaching a date to an object.

Periodization lets us subdivide time and chronology lets us order and understand events. We also want to classify a period in some way – monarchies, style, etc.

In early periodization and chronology we have Eratosthenes, a 3rd century BC Greek scholar who established the first Chronographia of Greek history. Ussher’s Chronology looked at ordering events. His work was rather overshadowed by the fact that he included – as others at this time did – an exact date and time for the “creation”: 23rd October 4004 BC, at noon. That sort of discredited his work but it can still be useful. A passage from the Common English Bible – Luke 3 (referring to John the Baptist) – gives a reference to a date in the reign of a Roman emperor that we can cross-reference with Roman record keeping to anchor events in a particular time.

So when we model periods with CIDOC CRM we do not need to fix the exact time span but we can connect relative timespans. We can assign attributes to an event that helps us explain where this assertion is made – multiple people can make such assertions and give multiple and conflicting definitions for a particular period of time. It’s important to have that multiple vocality in time periods.

We also need to understand period relationships – A is before B and B is before C, say. Putting periods in relative order is more important than having exact dates attached. So for instance we took English Heritage’s SKOS concepts and connected them to CIDOC CRM entities to build up conceptual entities.
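
This is not the CIDOC CRM tooling itself, but a small sketch of the underlying point: pairwise “A is before B” assertions are already enough to recover an overall ordering of periods, with no absolute dates attached:

```python
# Sketch: derive an ordering of periods from relative "before" assertions only.
from graphlib import TopologicalSorter

# Hypothetical assertions: each period maps to the periods it comes before.
before = {
    "Iron Age": {"Roman"},
    "Roman": {"Early Medieval"},
    "Early Medieval": {"Medieval"},
}

# TopologicalSorter expects predecessors, so invert the "before" relation.
predecessors = {}
for earlier, laters in before.items():
    for later in laters:
        predecessors.setdefault(later, set()).add(earlier)

print(list(TopologicalSorter(predecessors).static_order()))
# ['Iron Age', 'Roman', 'Early Medieval', 'Medieval']
```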

We have also created a simple tool for looking at dates and time periods available at http://hypermedia.research.glam.ac.uk/kos/star/time-periods/ which we showed at the first PELAGIOS meeting earlier this year.

[APOLOGIES. Our Wifi connection died here and notes from the second speaker were lost. We will try to reconstruct these later today.]

Glauco Mantegari, University of Milano-Bicocca, Italy

[Notes destroyed by wifi issue so will follow]

Use of Periods in British Museum Documentation – Jonathan Whitson-Cloud

Why would we have a thesaurus? Well the British Museum’s purpose is world peace – the concept was that better understanding leads to equivalence and peace.

We have a more pragmatic set of reasons for needing a thesaurus. We have 1449 terms but not everything fits together perfectly. We have all the usual parts of thesaurus terms. We try not to use related terms for periods. We only use the period/culture field to indicate production period. All of our use is very object orientated. We do not have associations to other references. We have lots of ways to record uncertainty and fuzzy periods.

We don’t call it period but “material culture”. We really think of it as a cultural label rather than a time or place – we record those elsewhere. This information always interacts with other things. We do not include date but most commonly we add context through material, authority/regnal dates, production person(s), school, state, ethnic group, find-spot, associated name or place or event. And some departments refuse to use periods at all.

Period is always part of a wider set, it always interacts with other information on the page.

Inference can be an issue – it’s always appealing to fill in as much information as possible. But if a definition of a period changes you have to update lots of records. So we allow conflict: we have periods and we have dates, and if they clash we allow that; it’s the reality of what we currently know.

What we like about periods is that they are conceptually simple – very good for a lot of our audiences.

We want our thesaurus to speak to other thesauri. We have made our British Museum data available as a SPARQL endpoint (here: http://collection.britishmuseum.org/Sparql/). The thesaurus could be extracted or referred to from that. We include periods, which are freely shared upon request, and you could embed references (URIs) to other thesauri in the BM data etc. And we are keen to engage with projects like PELAGIOS.
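
For readers who want to try the endpoint, here is a minimal sketch of querying it from Python; the query is deliberately generic (it just pulls back a few triples) because making real use of the data would need knowledge of the museum’s actual data model:

```python
# Sketch: issue a generic SPARQL query against the endpoint mentioned above.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://collection.britishmuseum.org/Sparql/")
sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```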

And we are now discussing those presentations in our groups (and having some lunch).

Group 1 had some questions: how can you tell that two time periods are alike? Use cultural artefacts and cultural opinion – combining the three approaches. How about using temporal techniques to define place? We also talked about simplifying models, and how you can keep rich complex information but document it pragmatically.

Group 2: it is clear that objects are important. And objects connect to taxonomies and their own systems of organising. Context is crucial, both the object context and the museum context. These are concepts about grouping and differentiating between objects and time concepts. Some data is fairly fixed – archeological layers etc. We also talked about the user dimension. Library catalogues can be useful for users but museum catalogues have not generally been designed for use by users in the same way.

Group 3: Glauco talked about the vagueness of different granularities. We talked about different scales of ontologies etc. this morning. Is there a system to define the granularity of your ontology or your reference system, so that we can make better sense of granularity? And touching on Jonathan’s talk, terminology is crucial. Just identifying key terminologies, taxonomies and ontologies in a given domain is critical. You need to be able to plug in some sort of controlled vocabulary or ontology etc. For me a big win for developing thesauri in RDF would be to find ways to cross reference them. We know that there is ambiguity but that’s where the Semantic Web vision does start to help understand those relationships and articulate the various meanings and systems in use. But I’m still struggling with where we connect spatial and temporal together. We can talk about the Romans for instance: we have a period but that’s only relevant to a certain cultural history in England; it means different things to people from other parts of Europe, say. We need better spatial boundaries on periods for this sort of reason.

??: We had a little chat after the panel session and we raised the issue of communality, communal understanding of concepts. And also about facilitating different types of users to make use of it. On the technological side the claim was that this is far more advanced on the spatial side, but the panel table felt the spatial community still had some way to go on the conceptual level. So, comments on communality…

Jonathan: Communality would be nice but I think it is more likely that we will share some things and define some things differently. And that should be fine. We need to assert useful knowledge. It’s useful to think of ourselves as a community. But we need to be aware of where we specialise, both as humans and in terms of data.

Ceri: I think a baseline about a period, about a place, is important to understanding, trusting and using data – an esperanto for our data here. Without that there is less chance of data being used beyond what it was originally intended for. Interoperability is the promise of the semantic web, but to get out of our silos we need to find those ways to trust and share. In terms of looking at periods and whether two are the same, we need assertions and meaning, not just labels, when we compare these things.

Guaco: I agree with you. Of course the definition of periods, and even the more general concept of time, is very difficult to agree on. On a practical level, and thinking through the practices of some specific domains, we do have some basic common assumptions about what a period is, about what an event is. Perhaps it’s not formalised as models. But archaeologists, for example, have a long tradition of defining that concept. Of course this new technology makes it possible to take care of and represent some possible differences in meaning, and eventually to let machines automatically understand different approaches. But I think too that basic agreement on concepts is needed, otherwise it becomes very difficult to do anything. Thinking of ontologies, triples, RDF, anything… you need attribution, so these new technologies can help us to show the provenance and attribution for what you are asserting. This is a core concept in Semantic Web use anyway. If we have completely integrated information we need to know how to connect information, how to understand the reliability of some data versus another.

??: So we have an issue here. We have authority as a way to get communal understanding. Is this an accepted road to communal understanding?

Comment: Wikipedia has given us a new idea of community and authority. Encyclopaedia Britannica was top down, but Wikipedia is about the community expanding and improving knowledge from the bottom up.

Guaco: Well Wikipedia as a model is not in contradiction to the approaches discussed. Perhaps it is useful to consider periodisation beyond the community of experts. Often they have a very specialised view, but general users are also interested in this information and provide different views on the same thing. That is important to interfaces, and to how users perceive and use these terms in actual systems.

Comment: For example on Rotten Tomatoes, the film review site, they have a top critics rating, a general critics rating and a user rating. And those might conflict heavily. One might argue that the perception of some cultural periods might also share that conflict of opinions.

Ceri: Some of that social software is very interesting and offers good opportunities. What I want is more than one point of view. Wikipedia doesn’t really solve that – where there is controversy the page represents compromise rather than multiple voices – but yes, it would be good to have new voices.

Jonathan: There is a kind of race between Google and Wikipedia to deliver this kind of thing. Google are trying to make a semantic search engine. They have quality but not quantity, but they are working on it. The tools will come. More than one view is great though. Linked data really allows…

Comment: You talked about attribution and provenance earlier. The way people use that data requires that provenance, especially if the broader community is contributing here. I am interested in how we keep attribution metadata even when data is mashed up into new services. How do we make sure that attribution is retained at those end points?

Leif: It’s a bit off-topic… I know that attribution is very desirable. But I wonder whether when you take information and create secondary products you need to always attribute – the difference between a coffee table book and scholarly materials is significant in terms of attribution and citations. But you must make it easy to attribute data.

Eero: Google bought Freebase about a year ago, and they are implementing it at the heart of the search engine and making use of it. One main difference versus DBpedia: Freebase is also created by the public, but curated by editors, albeit volunteer ones; the main aim is to keep it correct, with editors always trying to keep it correct. Interesting to note. And another point to note: they do not use a triple store in Freebase but a quad store, and that is to account for attribution and to help with quality.
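[To make the quad-store point concrete: storing a fourth element alongside each subject–predicate–object triple lets the attribution travel with the statement. A minimal sketch using named graphs in Python’s rdflib, with all URIs invented for illustration:]

```python
# Sketch of attribution via named graphs (quads): each statement is stored in a
# graph identified by its source, so provenance travels with the assertion.
# All URIs here are hypothetical examples.
from rdflib import Dataset, URIRef, Literal, Namespace

EX = Namespace("http://example.org/")
ds = Dataset()

# The same claim, asserted by two different sources, kept in separate named graphs.
source_a = ds.graph(URIRef("http://example.org/source/museum-catalogue"))
source_a.add((EX.coin42, EX.datedTo, Literal("reign of Christian V")))

source_b = ds.graph(URIRef("http://example.org/source/excavation-report-1984"))
source_b.add((EX.coin42, EX.datedTo, Literal("1670-1699")))

# Quads expose who said what, which a plain triple store would lose.
for s, p, o, g in ds.quads((EX.coin42, None, None, None)):
    print(g, "asserts", s, p, o)
```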

Nicola: There is a cultural consideration around attribution. One has to be careful not to use attribution in such a way that it implies credibility for a secondary use of the data. That is an issue in some other scholarly data contexts. If the original source data author is visibly credited it could imply endorsement, and that could create issues around sharing data.

Simon: I’m not sure that authority is the solution to this. It may help keep work to a certain standard but it does not address the heterogeneity of the data.

Panel Session on ‘Events’

Eero is introducing this session by saying that events are even more complicated than places and periods in some ways. There are so many aspects to the human sense of events. And from an interoperability point of view they are very complex to understand. So we have three different perspectives here.

Once Upon a Time: Space, time and event in modern storytelling – Laura Spinsanti, Spatial Data Infrastructure Unit, Institute for Environment and Sustainability, Joint Research Centre, European Commission

I’m not from the historical domain so I apologise in advance if I say anything silly about the past. So if we take “Once upon a time in a little cottage in the forest there was a little girl named Andrea…” we have a time and a place there. We then read about a dragon in the forest – something interesting is going on. This is an event.

We have all sorts of new ways to tell stories. We have microblogging – people write stories of their everyday lives on Twitter. So for instance we have a story, a time and some indication of place with these brief stories.

Hic sunt dracones – Here be Dragons. From space to place and back – ancient maps tell a story, about important places (church, castle, coaching inns) and dangerous places (mermaids). GIS describes static reality – now it is perhaps more dynamic, but it is a reality far away from what people can use, apart perhaps from scientists. And then we have neo-geography – usable geographic information to describe reality. In some ways this takes us back to place and the activities we are doing in a place. A map can tell me about hills and rivers on a Leonardo da Vinci map. If I look at a proper modern geographic scientific map I need lots of information and skills to read it. And meanwhile, through Twitter and other social mapping, people are creating these neo-geographies for themselves.

Looking to time. Time is not subjective – there are clocks everywhere – and yet time is no longer beaten out by events – we live globally and there are many events happening at the same time. Time is also our modern obsession. The promise of neo-geography is the idea that we can update our sense of place over time.

In a dictionary events are defined as “something that occurs in a certain place during a particular interval of time” but we are talking about the observed world and we are therefore talking about something important when we see something as an event.

And we have various sensing tools – EO sensors, VGI sensing which is social and participatory (and problematic). These sensors create huge amounts of data – in our VGI project we collect from 6,000 to 30,000 tweets per hour, with NoSQL databases, the cloud, distributed computing. We want to mine that data, we want the context and semantics around this data. And we have to deal with concerns about the importance of the event for the community – when we use data from social media we have a partial snapshot of that community and therefore a partial view of the importance of and activity around that event.

So, in conclusion…

Is history written by the victors? Well it is now more participatory, social, and more gender balanced perhaps, if only in terms of perspective, from the bottom up.

But there are lots of challenges here. We need methods for imprecise, vague and fuzzy data in order to use these new data sets. Time-varying information needs a standard time representation; big data and scalability are a new scientific challenge; and there is credibility – authority versus community. People are talking about areas they understand very well, so they have and bring lots of their own context.

Comment: I have a question… I kept thinking that you would show us the character in your story. Why not? In normal narrative terms you would focus on the character.

Laura: I focus on space, time, and event but I could have…

Deducing Event Chronology from Narrative – Oyvind Eide

This relates to Holmen and Ore’s calculation work. We looked at documentation dated 1660 about a church being built, another dated 1690 about it being built, and one from 1711 with an account of the construction taking at least six years. Then an account of 1984 said that a coin from a particular reign was found in the foundations. The idea is to reduce uncertainty. That’s fine for time…
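[To give a rough feel for the interval-narrowing idea – this is an illustration only, not the Holmen/Ore tool itself – each documentary statement can be treated as a constraint on a possible date range, and the constraints intersected; the reign dates used below are invented for the example:]

```python
# Rough sketch of the interval-narrowing idea (not the actual Holmen/Ore tool).
# Each documentary statement becomes a constraint on the possible range of a
# date; intersecting the constraints reduces the uncertainty. The coin's reign
# dates below are hypothetical, purely for illustration.
def intersect(ranges):
    """Each range is (earliest, latest); return their intersection."""
    earliest = max(lo for lo, hi in ranges)
    latest = min(hi for lo, hi in ranges)
    if earliest > latest:
        raise ValueError("The sources are contradictory")
    return earliest, latest

# Possible range for the year the church was completed:
completion_constraints = [
    (1500, 1660),      # a document of 1660 describes the church as already built
    (1500, 1690),      # a document of 1690 likewise
    (1648 + 6, 1690),  # coin of a reign starting 1648 in the foundations,
                       # plus the 1711 claim that building took at least 6 years
]

print(intersect(completion_constraints))  # -> (1654, 1660)
```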

Can we make a similar tool for spatial analysis? It is more complicated to move this into the spatial dimension. If you know something takes place in a larger area and that there are broader bounds for related events, there may also be ways to reduce uncertainty. Based on my PhD project, where I am modelling verbal and map-based expressions of geographical information, I am looking at what is there in a textual description. You have to allow leeway on connecting points when you look at, say, point A being a mile south of point B etc., and your possibility space gets worse and worse as you expand the description. Is it possible to make a geographic dimension like that and actually reduce it down, to make sense of the places by seeing that certain possibilities are not possible?

Narratology and how events are described – e.g. Bakhtin and his notion of the chronotope – can help us understand the temporal and spatial relationships that are artistically expressed in literature. Maybe here we can understand where space, time and narrative meet.

Nicola: How do you deal with the fact that the narratives you are comparing may actually be based upon each other – successive accounts building on those before. If you use those to verify each other that will surely be an issue in using this approach?

Oyvind Eide: It is important in the use of the system but I’m not sure about in the design of the system. Archaeologists look at sources to evaluate a system, so understanding sources here is very important in doing that.

Comment: Have you done anything with region connection calculus?

Simon: There are people working on spatial information and spatial relationships and they are trying to come up with a theory on this.

Laura: You can try to use context information. If there is a description of a building, perhaps a building cannot be constructed in just any place. Perhaps there is a building on a river, so you can exclude some possible locations, say.

Oyvind Eide: For various reasons I try not to use pre-existing maps, as many problems would be solved too quickly and I want to understand the uncertainty here.

Eero: I think the reasoning stuff is very important here. When we know that counties split and merge but not the population or coverage it is possible to reason with these sorts of approaches.

What is an event? – Ryan Shaw

I am an information scientist working with lots of historians, but a different sort of historian than most of those here – historians working on the recent twentieth century, looking at radicalism and civil rights. I guess I see a difference between scheduled events and historical events, which are more retrospectively defined.

So I think we’ve actually gotten pretty good at modelling space and time, and then we abstract these. But when we talk about space and time we actually want to model events and their possible relations. So what is an event shaped like? According to Wikipedia events are shaped like a box – this is the source information for DBpedia, Freebase etc. So we have this neat box with labels, participants, location, dates, etc. This is better than nothing, but I think we can do better than this. Events are not necessarily blocks… maybe events have a Tetris form, the slotting together of many events.

Events do not have a specific shape, they shift, and they are this mix that Humphrey talks about of consciousness and discourse. Our consciousness divides experience and stories into events in a way that is to some extent culturally independent. But that same psychology is broken up further by language. Watching a movie, playing a game – those also trigger that breaking up of events, as if we took part in these experiences.

So the event models people formulate in their minds follow dimensions of time, space, protagonists, causality and intentionality (Zwaan, Langston…), and how you store those events in your mind shifts depending on how you read a narrative. Many of these experiments have been done with simplified texts of fairy stories. But looking at more complex events at any point of history, you can tell a story at different granular levels. A story told in days may not fill in the detail of an account of an event told by year. But there is a relationship between levels – the story of the 17th century can be told in centuries or in decades, say.

There is an interesting relationship with place here. If we were planning a trip to the west coast we might say that we should visit San Francisco. But your itinerary between two different trips to the west coast may not coincide at all. So a Flickr map of San Francisco shows that tourists take totally different images to residents – there are two San Franciscos. In fact there are thousands. In events the same is true – there are thousands of different Arab Springs. There are a number of ways that each story constructs the same events differently.

So another example here is the Neighborhood Project – Matt Chisholm & Ross Cohen (http://hood.theory.org/). Some stories have common paths and key tracks, or recommended routes around which an event occurs. So we can see a clear block-like identity, but that is built up over time rather than being inherently true. Historians often have clear information about when things happened, but they are often interested in disrupting that clear path of blocks. For example, if you see a review of Bloodlands by Timothy Snyder there is consideration of how one can take a broader view of Europe between Hitler and Stalin. When we build those structured paths and chronologies into our infrastructure for teaching history, we abstract those events.

What we are striving for is models of events where we can abstract between different granularities of an event. Through a shared level. Then a more nuanced view of a consensus pattern (e.g. the British WWII, the Japanese WWII). Then at the next level there is the individual narrative, to compare different events. And what is interesting is how we can extract shared labels from these individual narratives.

Question: To what extent are you abstracting the spatial out of your definition of events?

Ryan: I think that you can think about those different levels. At the shared label level of World War II it is near impossible to make that terribly spatial. But at the individual narrative level that is going to be much more specific in terms of place. So there is a trade-off between richly modelling events by location and protagonists, and abstracting away to the level of labels etc.

Simon: I think the granularity is the issue in terms of space as well as times. So is your approach a practical solution to model events?

Ryan: I can make it a little more concrete for the use case I am interested in, the history of the civil rights movement. So I have accounts of the movement. You can see the evolution over time of the scholarly account of the movement and you can see how different sorts of individuals record the movement. I am keen to identify local events and then aggregate ways of sharing models, shared events, etc.

And we are now going on to discuss this panel over coffee…

OK we have returned refreshed… the final portion of the day is:

Open Forum Discussion on Space Time – Leif Isaksen is chairing this

The hope is to identify common themes. To ask about anything important we’ve missed. And to discuss how NeDiMAH takes forward ontology here.

So…

Methods and Technologies and Infrastructure are what we want to think about first. Both current best practice and current tools. And also what are the things that we need or could improve?

Methods – Current

Georeferencing is a good way to look at place. The gazetteer is a footprint and is based on spatial reference, but there needs to be an independent place reference system. That’s a theoretical issue.

There are a lot of things going on in building digital gazetteers. But these tend to be topographic mapping, or they are crowdsourced but used in historical projects. There is a large potential for retro-conversion of scholarly work like the Survey of English Place-Names, spatial authority lists etc. But those are expensive to do. A good gazetteer is a big gazetteer, yet getting up to big, properly backgrounded content is expensive and difficult. We need to consider that size isn’t everything. And we need to retro-convert historical and scholarly materials.

There are issues around clarity and IPR.

We need a vocabulary to link places to other places. We need other techniques here, not just gazetteers. A place ontology.

We also need an alignment of KOS (knowledge organisation systems).

We have gazetteers, but there is more to ontology than gazetteers. We need a better formal theory and ontology of place.

Un-GIS – we have these concepts of locations and places that we can work with away from GIS.

We need a temporal GIS – a system that allows me to work with temporal boundaries or events as they change a place over time.
Response: there are some systems for this, but they are not commercial and they do not deal with fuzziness or conceptual aspects. Secondo is one example.
Comment back: I’m talking about something that does let me handle those concepts, those granularities.

If we are talking about software, we use Postgres and PostGIS and you can use that for all that sort of data. It’s not a GIS but a relational database. A GIS is not the way to represent this stuff.
Response: you need something visual to work with that.

I would recommend looking at existing gaming and modelling technologies. Gaming engines perhaps useful here.

High quality metadata from mapping agencies, and for those to exist across borders. We need vector maps and vector quality, especially for Scandinavia. And it is crucial to be able to move from a name to a place on a map.

The Finnish land survey is publishing everything as open data, by the way.

There was talk of Postgres and PostGIS: you can in fact build visual elements on top of that through interfaces, so a clear set of demands or requirements is the key part of making those technologies work here.
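[A rough sketch of what that could look like: a Postgres/PostGIS table of places carrying explicit valid_from/valid_to columns, so queries can be restricted by period as well as by area. The table, columns and connection details are invented for illustration, and the code assumes the psycopg2 driver and the PostGIS extension:]

```python
# Sketch: querying places that existed in a given period and area from a
# PostGIS-backed Postgres database. Table/column names and connection details
# are hypothetical; assumes the psycopg2 driver and the PostGIS extension.
import psycopg2

conn = psycopg2.connect("dbname=historic_places user=demo")
cur = conn.cursor()

cur.execute(
    """
    SELECT name, valid_from, valid_to, ST_AsText(geom)
    FROM places
    WHERE valid_from <= %s   -- place already existed by the end of the period
      AND valid_to   >= %s   -- and had not yet disappeared at its start
      AND ST_Intersects(
            geom,
            ST_MakeEnvelope(%s, %s, %s, %s, 4326)  -- lon/lat bounding box
          );
    """,
    (1700, 1600, -5.0, 49.0, 2.0, 59.0),  # e.g. roughly Britain, 1600-1700
)
for name, start, end, wkt in cur.fetchall():
    print(name, start, end, wkt)
```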

Validation of existing crowdsourced materials could move us towards reliable data that builds on that existing material.

Geo-parsing needs are very specific for historical materials and the tools need improvement. Context is so important in parsing historical materials, since places change over time; there is a need not only to create specific parsers for specific materials, but also to have a way to understand how those geoparsers handle a given placename at the time of that material’s creation.
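[A toy illustration of why that temporal context matters for geoparsing – the same name can refer to different things at different dates. The gazetteer data below is simplified and invented for the example:]

```python
# Toy illustration of why geoparsing historical text needs temporal context:
# the same name can denote different entities at different dates. The entries
# and date ranges below are simplified, invented examples.
gazetteer = {
    "Byzantium": [
        {"valid": (-660, 330), "refers_to": "the Greek city of Byzantion"},
        {"valid": (330, 1453), "refers_to": "Constantinople (older name still in use)"},
    ],
}

def resolve(name, document_year):
    """Pick the gazetteer entry whose validity range covers the document's date."""
    for entry in gazetteer.get(name, []):
        start, end = entry["valid"]
        if start <= document_year <= end:
            return entry["refers_to"]
    return None

print(resolve("Byzantium", 200))   # -> the Greek city of Byzantion
print(resolve("Byzantium", 900))   # -> Constantinople (older name still in use)
```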

Is there a need for temporal parsers as well?

We have frame semantic parsers.

We also have Freebase, dbpedia, geonames, pleiades etc. available to make use of.

We need to develop or improve parsers.

We need event parsers.

We need improved event gazetteers.

We need a good event ontology.

We do have CIDOC CRM (and its CRM-EH extension).

The Linked Library Data (W3C) resources have some real usefulness as well.

Also SKOS.
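[Tying SKOS back to the earlier wish to cross-reference period thesauri in RDF: a minimal sketch with Python’s rdflib of asserting that a locally defined period concept is a close, but not exact, match for a concept in someone else’s thesaurus. All the URIs are invented for illustration.]

```python
# Sketch: cross-referencing two period thesauri with SKOS mapping properties.
# Assumes rdflib; every URI below is an invented example.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS

BM = Namespace("http://example.org/bm-periods/")
LOCAL = Namespace("http://example.org/our-periods/")

g = Graph()
g.bind("skos", SKOS)

# Our own period concept, labelled with its spatial scope.
g.add((LOCAL.roman_britain, SKOS.prefLabel, Literal("Roman (Britain)", lang="en")))

# Assert that it is a close, but not exact, match for another thesaurus's "Roman".
g.add((LOCAL.roman_britain, SKOS.closeMatch, BM.roman))

print(g.serialize(format="turtle"))
```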

And… if anything else occurs to you, do email or contact or comment in the direction of NeDiMAH.

NeDiMAH is trying to think about formulating an ontology of methods. They did some work on this in Digital Humanities a few years ago, but DH has expanded significantly since then.

I’ve been finding it useful to think about a cookbook approach – here is a method, here’s what it is intended for, here are related methods, here’s an example, etc. And perhaps a way to see if that method is good or if you should look elsewhere. Not giving full information, but giving pointers so that intelligent tourists can find out where to find out more.

Shawn: It’s a sensible process but it is a long-term process. We’ve identified so many different approaches here. In a perfect world I’d love to send everyone home with homework. We are such a diverse group here, and we may think that we are one group, but are we even speaking the same language? We have to see how people actually use these things so we can find what that one big pain point is that needs resolving, what we actually need and what we actually mean by that term. We use ambiguous terms. Even if we collected one use case from each person here, with narrative and process about why that is the big pain point, that would get us a good step down the line. That would be a great way to start to move the process forward.

Simon: So should everyone provide a use case?

Shawn: Yes, that would be fantastic. We will go through all the materials we are recording here today, but if people can go into that bit more detail and elicit what those needs are, that would be great. If we can task that out to people, even if it is just one use case each, that would be fantastic.

Comment: I think that it would be important to provide some form to fill in to help us to provide you with those use cases in a consistent way.

Comment: The cookbook approach sounds good. Each year the barriers to entry for this stuff get higher so I’m hoping for something more – step by step guides.

Leif: I am concerned about mission creep, about making things too vague to be useful. But we could look to provide a “You Will Need…” list to help explain the sorts of resources one will need to have on hand.

So… what will be happening next in this project? Well we will be writing our report for the end of January 2012. We will create some sort of wiki or forum to encourage people to contribute and comment. But we may ask you to target your knowledge and we certainly encourage you to engage in that process as much as you want.

We will also be planning our next workshop and will be in touch about that.

Finally this is our very first workshop so do you have any feedback about this event. Good or bad.

Comment: I really enjoyed all of the talks but now I know what you want to achieve it might have been useful to step away from the theoretical and look more at the pragmatic issues, the way in which issues are currently being addressed etc.

Yes, I understand that. You don’t want to be too pragmatic but we definitely take that on board.

Comment: There is always the issue of 100 flowers type thing. Balance needed between structured and unstructured?

Comment: Today wasn’t very structured and for a first event that feels right, perhaps the next one might be more structured.

Comment: It seems that there is no agreement on the spatial issue; theoretically time and space are the same thing. Can we be clearer on what we are talking about? Is there a case for making it event-centric, say? What’s the balance between finite and infinite here?

I think this is a contentious space to be honest, but maybe… is there consensus here? Perhaps there were issues in how we described the event today.

Comment: It’s terribly difficult to come up with a conception of space and time at this sort of workshop.

Comment: I agree… but if we don’t… who will? So for example if we treat them as two entities we treat them differently from treating them as one thing.

Comment: I think the way to broach that is to air the problems and find areas of commonality and shared issues and working on concepts to solve those problems. I don’t think that we want to chisel up the concept.

Do people feel it’s helped them personally in thinking through these issues and awareness raising today?

In terms of our report, we will share and communicate it, and future workshops will be on other issues of space and time – GIS, web mapping etc.

Eero: the next workshop will probably be in Hamburg at the DH conference. We can put out the theme list in the call for papers, and if we have proceedings of that workshop then there are already useful resources likely to come out of it.

Would you be interested in timelines, chronologies etc…

Comment: Well this is old news for me…

Shawn: But actually this is a disciplinary issue. So many new digital humanities people are entering this space and are new to this and we need to be able to give them some different expressions of these sorts of issues and ways into these areas.

How about GIS/Webmapping?

Comment: I don’t know, it would not be of interest to myself.

Comment: Generally about visualisation I probably struggle with that. I’d like to see a wide range of approaches. Specifically as it applies to space and time.

We do have another working group in this area so we don’t want to tread on toes… we need to balance what we do with the work the other groups are doing.

We will be putting out calls for papers, and communities will be brought together and you’ll hear about that as it moves forward.

Finally…

Tomorrow is the Pelagios2 Hackfest (we’ll be liveblogging this). The idea is to explore open resources related to history, culture and heritage, using geography as a point of reference. Pelagios2 is based around the ancient world but actually the day is broader than that. We’ll have tech specialists and domain specialists and we’ll be coming up with quick wins and pain points in interlinking open heritage resources with geospatial concepts. We hope to find out what is easy, what is valuable, and what we can’t do and why.

And with that we are closing the day with a giant thank you to all of the speakers, organisers and those recording the day.


JISC Show & Tell, Timelapse and Awards Results

This post has now been updated to reflect the results of the JISC Geo Awards.

These timelapses show the morning and afternoon Show & Tell sessions at the JISC Geo Programme meeting:

Morning Session Timelapse

Click here to view the embedded video.

Afternoon Show & Tell Session

Click here to view the embedded video.

Following the Show & Tell session the JISC Geo Programme Awards were presented to projects in the JISC Geo programme based on their achievements to date and, in the case of the Best Project award, the votes of the community collected at the end of the Show & Tell Session.

Project Blog Post (Single Entry) of the Year was awarded to xEvents (#xevents). The nominees were: GEMMA (#gemmaProject); IIGLU (#jiscG3); NatureLocator (#naturelocator); xEvents (#xevents).

Project Blog (Overall) of the Year was awarded to IIGLU (#jiscg3). The nominees were: GeoSciTeach (#GeoSciTeach); IIGLU (#jiscg3); NatureLocator (#naturelocator); PELAGIOS (#pelagios).

Project Manager of the Year was awarded to Amir Pourabdollah, ELOGeo (#elogeo). The nominees were:
Amir Pourabdollah, ELOGeo (#elogeo); Chris Higgins, IGIBS (#iGibs); Stuart Macdonald, STEEV (#STEEV); Elton Barker, PELAGIOS (#pelagios).

Hybrid Project Manager/Developer of the Year was awarded to Nick Malleson, GeoCrimeData (#geoCrimeData).

Project Developer of the Year was awarded to the GEMMA Team (#gemmaProject). The nominees were:
GEMMA (#gemmaProject); Halogen 2 (#halogen2); PELAGIOS (#pelagios).

Project of the Year was awarded by the JISC Geo Community, via voting at the JISC Geo Programme meeting, to NatureLocator (#naturelocator). The nominees included all 12 of the JISC Geo projects.

If you are one of our fabulous project winners please do feel free to post a copy of your certificate on your own blog. You can download them from the JISC Geo Flickr Set.


JISC Geo Programme Meeting – Day Two

Today we are in day two of the JISC Geo Programme Meeting and we are liveblogging as appropriate – so any spelling issues etc. will be corrected as soon as possible – please do comment on content etc. below.

David Flanders of JISC is introducing the day: The aim of today is to identify recommendations for the future.

There will be three sessions, each of which runs as a presentation followed by break-out groups around a theme. Each table will have a scribe. The goal will be to discuss potential recommendations for how JISC should advance spatial. Each recommendation will then be written up by the project manager and the scribe for a given group and posted on that project’s blog, where it can be looked at further – a sort of ad-hoc community consultation. We will run this process three times today.

Training Non-GIS Experts in the Use of Geospatial Tools and Technologies at Stanford University - Patricia Carbajales


I will be talking about the way that we support our community around geospatial tools and our approach. To put this in context: in the 1970s GIS was used mainly by developers, but we have now reached the point where an increasingly broad group of users use GIS technology – in 2011 we have general users engaging with these tools. And there has been a real evolution in GIS. We have moved from being the provider of map data to being able to provide tools that assist decision makers. We are moving to a place where there are hundreds or thousands of geospatial data users who really don’t care that much about the quality of the data, and we have to give them the basic technology to understand and use the tools and data. And we are also looking at how those results impact our environment and our society.

GIS in Higher Education enhances educational goals – that’s a really important message to get across.

We have a center for excellence in GIS and we want that to be a space where users can help and support each other. These can be really interdisciplinary user groups. It is important to have the faculty on board. No one department has full ownership; everything is a communal good in this space. And we think that GIS benefits from being very unique and at the same time very diverse. Students come from diverse course backgrounds, so we always have to be able to offer examples from their discipline or specialism. We have to make our support relevant to them, and we try to create more of a learning environment than a traditional teaching environment.

Our keys to success start with ensuring students have a really sound basic understanding of the concepts and basic principles. We need to teach basic mapping know-how. And as experts supporting those students and faculty members, we have to have suitable examples to hand from their field.

In terms of the principal causes of failure: we need to be aware of how we plan and manage, and keep our support user/customer focused. We have to offer comprehensive, simple and flexible support.

The form our support takes is, firstly, class support – we work with classes where students have to learn ArcGIS in one week, and do this through homework around that work so that classes can focus on using those tools. We also undertake project management for those class projects – helping to find data and do the analysis, allowing the professor to focus on the application itself.

We also provide instruction, consultation, data resource center and support center for all members of the university.

We then also collaborate, provide a data resource center and offer technical support for the specialist spatial history lab, digital humanities lab, etc.

And finally we undertake outreach work with the wider community.

At Branner Library we provide a center where students and staff can come in and get one hour of intensive one on one support.

So, we try to encourage “thinking with maps”. We focus on GIS education to raise awareness to stimulate interest and provide a sound foundation. And we provide a learning environment rather than a teaching environment. We see this as a sort of pyramid of engagement that begins with awareness and peaks with higher level modeling applications. At each level of detail fewer users need to gain these skills but all have a route to reaching this high level understanding of geospatial.

For the higher-level skills we hold workshops, and these are hands-on and take place as needed by students – we don’t make students wait for a full session to run if they need that support now. We follow on with consultation on a one-to-one basis. We tailor these to cover the most frequent needs. We also push students to practice between sessions – I always tell the students that, like tennis, there is no point showing up for a session every few months if you have not practiced in between. We also gather feedback.

The workshops are always hands on. In one workshop they find out how to make a map from the beginning to the end.

Right now we use ArcGIS; it’s what the US market demands and it allows deep analysis of the data. We have a campus-wide licence and we get free support from them around that. But we also use Google Earth and Maps because they are easy and familiar to students, and that works well for collaborative use of geo or publishing of data.

We would like to offer other more specialized skills for spatial analysis but only where demand is demonstrated. Tools like PostGIS are too niche to need regular workshops to be run at present.

Our main objective is to establish a geospatial foundation for our learners. We have limited resources and some very specialised groups. Faculty involvement is critical, and often that is not easy, but it does provide reinforcement of fundamentals. The human resource of workshops is so important. If students take an online course and are then asked to come back, they rarely will. If you are there while they take a course and they can bring you questions, it makes a big difference. The majority of students like that human interaction very much.

Increasingly we are thinking that expanding our support for programming languages such as Python would really help us with what we do.
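[As a flavour of what that Python support might cover, a tiny sketch of automating a repetitive task with arcpy, the Python package that ships with ArcGIS; the workspace path and layer names are invented for the example:]

```python
# Tiny sketch of the kind of repetitive task Python scripting removes from
# point-and-click GIS work: buffering several layers in one go using arcpy,
# the Python package that ships with ArcGIS. Paths and layer names are invented.
import arcpy

arcpy.env.workspace = "C:/data/campus.gdb"

for layer in ["roads", "footpaths", "cycleways"]:
    arcpy.Buffer_analysis(layer, layer + "_buffer_100m", "100 Meters")
    print("Buffered", layer)
```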

Q&A

Q1) Are your training materials available for others to use?

A1) Our materials are public and available for others to use, especially the Google training. In most of these workshops there are training materials as well as tutorials around them. We do demos every 15 minutes in those sessions; they are very interactive and all join in. We take it very slowly to encourage them to understand what they are doing throughout. We teach undergraduates and postgraduates in the same way.

Q2) Do you do any tracking of students that go to the workshops?

A2) We have one-to-one consultations a week after the workshops. Three hours is a huge investment for the students, and the hour of one-to-one consultation is hugely helpful to them, so they usually come back again.

Q3) We just produced a Python for ArcGIS course, so you’d be welcome to use that from our website!

A3) Thank you, we need that!

And now onto our next presentation:

Mapping the Republic of Letters - Nicole Coleman


This is an in-depth look at one geospatial project. This is different as we really don’t use GIS in this project, but geospatial is incredibly important to it. It is particularly inspired by the historical mapping of the period of the letters we are looking at (1500-1870).

We take inspiration from early maps in the way in which the maps themselves are a reflection of the perspective at that moment in time. This feels relevant to how we visualise and how we map the correspondence we are looking at. In fact we created a timeline and map for the intellectual property around our project. We are trying to think differently about space and time for this material.

One of the things we have been doing is to try and establish visual ways to browse and explore the data to enable scholars to find suitable materials, to navigate, to understand the choices they are making for a visualisation. The original materials are always linked back to their original archive copy so you can explore the historical resources you are visualising and know where they have come from.

I’m just going to walk you through a few case studies to illustrate the challenges we have.

So, looking at Athanasius Kircher, we only have letters sent to him, not those he sent. Paula Findlen was the lead on this project and she was keen to look in more detail at the nature of the letters. We have used Fineo, a multidimensional content viewer that allows you to look at the locations, the languages and, because it is relevant to this historical figure, the faith of the correspondent, to understand their work.

British and Irish Travellers in Italy – students went through this dictionary of travellers. These are really detailed entries of arrivals and departures, although not all are consistently detailed. What has been interesting about looking at the areas recorded is that Sicily is treated as a city – a peculiarity of this archive.


We can take this data and look at who was in a city at the same time. You can look at particular periods of stay or particular individuals. You can also connect to data on the individuals involved with information on that person’s age at the time of that stay, etc.

So looking at the temporal context was really helpful but did not give us the complete picture but we also wanted to look at relational information. So we have a kind of a network graph tool. So in this visualisation a dot indicates a person, blue lines indicate a very loosely defined relationship.

These tools are really exciting for exploring this sort of data, where connections are just not that apparent but can be discovered and explored through these sorts of tools.

Voltaire’s correspondence is our largest data set. He was very, very prolific. We should note here that the tool we have developed uses contemporary country boundaries, because there are no good shapefiles available for the geography of the period, although for our research it is actually more important to look at cities. We have also tried to indicate on the timeline shown with the map where letters are available but are not mapped. This is really important as it tells you how representative the visualisation is of the data.

Looking again at a map with the tool Inquiry, a map of source locations for letters written by Voltaire and we can see most letters do not include the source locations.

Putting letters on the map draws our attention to these materials in a different way. So for instance this letter from Panama becomes very visible. When you look at this letter the content may not be so exciting. This is another way of understanding the data we have and which materials are and are not significant. And it can indicate trends or unexpected patterns in letter sending – for instance, few letters are exchanged with those in Spain, and indeed one letter exchange to Madrid turns out to be with a non-Spanish correspondent staying in Madrid.

Benjamin Franklin – Caroline Winterer is working on this project – and we’ve been looking at a comparison of the exchanges of Voltaire and Benjamin Franklin. They had common correspondents but do not appear to have corresponded with each other. But you can see various second-degree connections between Franklin and Voltaire. So we take this data out of a spatial context and out of a temporal context for this specific network diagram. We balance this sort of network relational graphing with spatial-temporal context visualisations.

We can look at Benjamin Franklin’s network at the time of his stay in London. And in this case we compare with the network of David Hume. Looking at how that experience connected him to the Scottish Enlightenment (hence Hume used here).
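[A rough sketch of what finding those second-degree connections can look like computationally, using Python’s networkx library; the edge list is a made-up fragment, not the project’s actual data:]

```python
# Sketch: finding shared correspondents (second-degree connections) between two
# writers who never wrote to each other. Uses networkx; the edge list is a
# made-up fragment, not the project's actual data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Voltaire", "d'Alembert"),
    ("Voltaire", "Catherine II"),
    ("Benjamin Franklin", "d'Alembert"),
    ("Benjamin Franklin", "David Hume"),
    ("David Hume", "Voltaire"),
])

# No direct edge between Voltaire and Franklin...
print(G.has_edge("Voltaire", "Benjamin Franklin"))           # False
# ...but their shared correspondents link them at second degree.
print(list(nx.common_neighbors(G, "Voltaire", "Benjamin Franklin")))
```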

We are now moving to breakout groups so blogging will pause for now.

So, we are back after a most excellent lunch and discussion session!

Presentation of an ‘Emerging Geospatial Innovation Themes Map for UK Universities and Colleges’ by Gregory Marler (Programme Evidence Gatherer for the JISCgeo Programme), No More Grapes Ltd.


Gregory is next up to present. I’ve been reading the various project blogs. Geospatial is obviously frequently mentioned in your posts but also data is a huge theme as is usability. I tried to group all the projects according to how they are using data and who they are using it with.

There were some interesting posts on people and geospatial data. So JISC G3/IIGLU did some usability testing on Potlatch, one of the editing tools for OpenStreetMap and that was hugely useful to the OpenStreetMap community who made some changes to the interface as a result.

There was also discussion of teaching without calling it teaching – things like GEMMA that shows users through example and practice so that users can get their hands on the data. NatureLocator also encourages users to learn and explore more.

It’s important to keep telling people about your projects – forums, blogs, TV (if you can), meetings, word of mouth; even emails are important. You’ve probably collected email details from people who tried the beta – have you told them that they can try it again now the bugs are fixed? Remember to update your potential users with any key changes you make. And you need to make sure that you flag up key information to all your audiences – techies want information on the end service as well!

Some of the blog posts were long but most were nice and short and readable. Images are particularly powerful – particularly screen captures of emerging products. There was real variety of scope, some about the technology – and great sharing of experience there – some about the research. The key message here was learn and have fun – it’s been huge fun reading the blog posts!

And I will finish on a joke from a project that posted a whole page of lightbulb jokes!

“How many green building consultants does it take to change a lightbulb?”

“None, we were all at a conference!”

And so for a wee while we will discuss project blogging a wee bit.

Q&A

Comment) Our lab blogged an awful lot, but for the JISC work we didn’t blog much. We were so busy doing the work we didn’t have much time to blog. We thought we would do a lot of blogging, but actually we didn’t want to give too much away, so we were fairly quiet.

David) But you were coding hard. I feel like blogging frees you up to share as you go, when it’s useful, rather than writing a big final report. And the average was 17 blog posts, which is equivalent to…

Comment) We found blogging really useful as we were a consortium: it was a way to track progress and gave us a reason to share expertise and chase project partners. And work got done quickly and efficiently and we had lots of interest in…

Comment) In terms of blogging, the final report is a write-only document. I’m not sure they are ever read. Blogs are read. You pay people to write stuff, so the fact that blogs are actually read by potential collaborators and the community matters.

David) So how many of you looked at the analytics? Or didn’t?

Comment) I didn’t want to put pressure on myself, I just wanted to get started, to make links to other work etc.

Comment) I’ve not blogged before but have written lots of formal reports. It’s a really different way of communicating. If you can explain the concepts to a really novice user then you really have to understand your work. Thinking about that can really help you rethink what you are doing and throws up challenges for yourself. There can be real snobbery about these things – that you have to be very formal in language. I don’t think it matters; you need to get the ideas across. I really enjoyed the different sort of writing we’ve seen with this.

David) So how has this gone in terms of convincing your organisations about blogging? I know some institutions require really strict reporting?

Comment) We have rigorous internal processes for project management and the team struggled with doing something additional. We did improve a bit I think, but we all struggled with being informal in that way; you worry about pressing publish. It is useful to see something a different way – a lighter, more sensible way is nice. Trying a different methodology helps, but it’s a start.

Comment) A concern and interest I have is about publication. Humanists tend to write journal articles – there is a paper there on the blog that just needs to be pulled together and allows me to reuse and publish all that work we’ve done.

Dave) There is a real mixture on those blogs: serious research work, light and silly content, project management, technical discussion. We had Greg there as a recent graduate to be a reader for these blogs – to give an outside view on what was working well, who was enjoying the blogs.

And now we are having our next breakout discussion, this one focussing on geospatial data and the needs for creation, management, repurposing, expressing, analyzing and sharing of geospatial data.

And after that lively chat we move to the last presentation.

Presentation on ‘The Myth that is Project Sustainability’ and ‘Future Strategic Funding Areas for JISC’ by David F. Flanders (JISC Programme Manager) and Matthew Dovey (Programme Director, Digital Infrastructure (e-Research))

Obviously any discussion of funding is subject to change. You need to speak to programme managers and there is some advice and guidance we can give as those that regularly read bids. Please don’t take my comments as gospel.

How many of you want to continue your project? And is it sustainable to continue?

And what would you want to sustain? Is it a product? the next big Facebook maybe?

Is it about skills? Those bespoke services at ac.uk can translate into income from student fees

Staff? Attracting and maintaining staff is important to think about in sustaining.

Community? Change how we do things? Ironically this is the most expensive of the options but we do have a great community here, it would be super to see it continue to thrive.

My advice to you is that if you want your project to continue to innovate then you need to continue to bid. Bid, bid, bid. It’s not fun, it may not be perfect, but it works; it produces products in an incredible way. Moving forward you are going to bid for more things. But where will you bid? A lot of our projects are moving from a research project that is very innovative into something that can be taken forward. Maybe a product, maybe skills, maybe something else. Which of these things can be taken forward?

I really do believe that spatial should be across all of our activities. I am going to try to show you some future plans of the JISC teams to get an idea of where spatial might fit into that bigger picture.

So, here’s the big picture. We have a top-level strategy, we innovate, and we take some of those innovations into services. That balance between innovation and service can be tricky. If you are interested in creating a service it’s not as nice a space as an innovation space. Business managers, legal teams, etc. come into that. There is much more there than just the product, and the vision may be very different from your original innovation.

In recent years our budget has been a pretty good split between innovations and services. You should really go for that innovations chunk of the pie chart.

So we have an overview of the people of JISC and that is important if you are looking across the full spectrum of JISC activities. Under innovation there are four teams: learning; user and organisations; content; infrastructure – huge potential across all of these for geospatial. And the key names here are Tish, Craig and Catherine. If you have a project that applies to these strategic thinkers then contact them, email them, call them, ask them about upcoming funding. A little bit of effort can really help you in feeding into these programmes, to hearing about the opportunities.

My boss at EDINA is Rachel Bruce, who leads the infrastructure team. We are the largest team in innovation. It’s not a bad idea to know about our team going forward. In addition to Rachel there are two directors working with her, including Matthew Dovey, who is here to walk us through some of the new diagrams and branding we are currently thinking about:

In terms of infrastructure as a whole we have three broad areas: Information and Library Infrastructure, Research Management, and Research Infrastructure – about doing that research.

Digital Directions is a diagram that shows the elements underpinning these themes, including geospatial, authentication etc.

If we look at library and information systems we have areas there around emerging opportunities, resource discovery, and curation and preservation. In Research Infrastructure we have research information management and research data management – the support available to the researcher. What are the tools that research teams need? How do we feed recognition for teams into things like the REF etc.? And in Research Management we have research support, research tools and repositories, and curation shared infrastructure – the ways in which data can be reused and preserved for the future on a technical and social level.

The key thing about geospatial is that it features in all of the areas here. So when do we keep this integrated into wider programmes, and when do we fund geospatial as a specialist area? And on that, back to David.

So that was a very whistle stop tour of a very varied portfolio. The main message is please do bid. And here is my bidding advice:

  • Contact the programme manager by phone/Skype and tell them about your idea to make sure it is in scope and meets the strategic objectives. It exponentially improves your odds.
  • Add a use case and an image/diagram on the first page of your bid. Most reviewers read 5 to 10 bids at a time, so bids have to be readable.
  • Repeat back what’s written in the call – you really need to make sure your bid clearly indicates how your idea meets the call and why.
  • Less is more. Five pages with diagrams is great.
  • Focus on what you are going to do, rather than why it is important.
  • Say which human is going to do what – the more specific you can be, the better. It helps people understand the intimacy of the bid.
  • Clear budget – explain why the numbers are as they are. A paragraph with percentages is really helpful: x% will go into development, x% to dissemination etc. That’s really important for markers.

And an addition from Matthew: the focus moving forward has to move from geospatial-led activity to application-led activity. Think about these things as a researcher-led proposal that answers a real problem. Embedding those tools is essential. Think about sustainability. Bidding for more funding is a sustainability model, but that is questioned at a certain point. Our funding is finite, so are there other revenue streams? Can you commercialise? Can you charge people outside UK academia but keep it free to HE? Can you get some cost recovery from your host institution? Just have a think about those elements.

Please do take advantage of Matthew’s time this afternoon with any questions.

So with that here’s the next few days lined up… tomorrow we have the Space-Time workshop and also the Review panel going on in parallel. Then on Thursday we have PELAGIOS2 – an open hackday. And in parallel we have the Geospatial Service Review session.

One last reminder. We want comments on how we can improve what we do. So do fill in our survey!

And finally I have run 8 programmes over the years and this has been one of my favourites. Your work has impressed me immensely!

 

 
