MOOCs in Cultural Heritage Education

This afternoon I will be liveblogging the MOOCs in Cultural Heritage Education event, being held at the Scottish National Gallery of Modern Art in Edinburgh.

As this is a liveblog please excuse any typos and do let me know if you spot any errors or if there are links or additional information that should be included. 

Our programme for today is:

Welcome and Intro – Christopher Ganley (ARTIST ROOMS, National Galleries of Scotland and Tate)

Image of Christopher Ganley (National Galleries of Scotland)

Christopher is the learning and digital manager for the National Galleries of Scotland and Tate. In case people here don’t know about Artist Rooms, this is a collection that came to Tate and NGS in 2008 – around 1,100 items of art acquired from Anthony d’Offay with the support of the National Heritage Memorial Fund, the Art Fund, and the British and Scottish Governments. The remit was for the collection to be shared across the UK to engage new audiences, particularly young people. The collection has grown to around 1,500 items now – Louise Bourgeois is one of the latest additions. The Artist Rooms Research Partnership is a collaboration between the universities of Glasgow, Edinburgh and Newcastle with Tate and NGS, led by the University of Edinburgh. And today’s event is funded by the Royal Society of Edinburgh and has been arranged by the University of Edinburgh School of Education as part of the outreach strand of their research.

Year of the MOOC?: what do Massive Open Online Courses have to offer the cultural heritage sector? – Sian Bayne, Jen Ross (University of Edinburgh)

Sian is beginning. Jen and I are going to situate the programme today. Jen and I are part of the School of Education working in Digital Education, and we are ourselves MOOC survivors!

Image of Sian Bayne (University of Edinburgh)

We are going to talk about MOOCs in a higher education context, and our research there, and then talk about what that might mean for museums and the cultural heritage context. Jen will talk about the eLearning and Digital Cultures MOOC and expand that out into a discussion of the cultural heritage context.

So, what do we know about MOOCs? It’s a bit of a primer here:

  • Massive: numbers. Largest we ran at Edinburgh had 100k students enrolled
  • Open: no “entrance” requirements.
  • Online: completely.
  • Course: structured, cohort-based. And we don’t talk about that so much but they have a pedagogy, they have a structure, and that distinguishes them from other open education tools.

In terms of where MOOCs are run we have edX – they have no cultural heritage partners yet. We have Coursera, who do have cultural heritage partners including MoMA. And we have FutureLearn, who have cultural heritage partners (but none who are running courses yet).

The upsides of MOOCs are that they have massive reach, a really open field, high profile, massive energy, new partnerships. But on the downside there are high risks, there are unproven teaching methods – the pedagogy is still developing for this 1 teacher, 20k students kind of model – and there is a bit of a MOOC “backlash” as the offer begins to settle into the mainstream after a lot of hype.

In terms of cultural heritage there isn’t a lot out there, and only on Coursera. The American Museum of Natural History, MoMA, California Institute of the Arts and the new Artist Rooms MOOCs are there. Some interesting courses, but it’s still early days and there are not many cultural heritage MOOCs out there.

So in terms of the UK, Jen and I have just completed some research for the HEA on MOOC adoption. One aspect was which disciplines are represented in UK MOOCs. We are seeing a number of humanities and education MOOCs. FutureLearn have the most of these, then Coursera, and then there are cMOOCs in various locations. In terms of the University of Edinburgh, we launched our first MOOCs – 6 of them across 3 colleges – last January and were the first UK university to do so. This year we have 7 more in development, we have 600k enrolments across all of our MOOCs, and sign ups for the Warhol MOOC are well past 10k already.

So why did we get involved? Well, we have a strong and growing culture of digital education. It was an obvious step for us. There was a good strategic fit for our university and we felt it was something we should be doing, engaging in this exciting new pedagogical space. Certainly money wasn’t the motivator here.

MOOCs have been around for a while, and there are still some things to learn in terms of who takes them, who finishes them etc. And we’ve done some research on our courses. Here the Philosophy MOOC saw over 98k students, but even our smallest MOOC – equine nutrition – saw a comparable number of registrations to our total on-campus student body (of approx 30k). Of the 309k who enrolled, about 29% of initially active learners “completed”, with a range of 7 – 59% across the six courses. We think that’s pretty good considering that only about a third of those who signed up actually accessed the course – of course it’s easy to sign up for these and hard to find time to do them, so we aren’t worried about that. The range of completion is interesting though. We had 200 countries represented in the MOOC sign ups. And age wise the demographic was dominated by 25-39 year olds. And we found that most people who took the MOOCs, at least in the first round, already had a postgraduate degree. They were the people interested in taking the MOOCs. And now over to Jen…
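
As an aside for readers puzzling over those percentages: the completion figure depends entirely on which denominator you use (all sign-ups versus initially active learners). A purely illustrative Python sketch using the approximate numbers quoted above:

```python
# Illustrative only: completion looks very different depending on the denominator.
# Figures are the approximate ones quoted above (309k sign-ups, roughly a third
# of whom ever accessed the course, 29% of the initially active completing).
signups = 309_000
active = signups / 3            # assumption: ~1/3 of sign-ups accessed the course
completion_of_active = 0.29     # quoted rate among initially active learners

completers = active * completion_of_active
print(f"Completers: ~{completers:,.0f}")
print(f"Completion vs active learners: {completers / active:.0%}")
print(f"Completion vs all sign-ups:    {completers / signups:.0%}")  # looks far lower
```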

Image of Jen Ross (University of Edinburgh)

Jen: I want to tell you about the experience that lecturers and tutors had on the eLearning and Digital Cultures MOOC that took place last January. Firstly I wanted to talk about the xMOOC and the cMOOC. The xMOOC is the highly structured, quite linear, institutional MOOC – the Coursera or FutureLearn model. Some peer interaction, but as a side benefit, with the content as the main thing. Teacher presence in these sorts of MOOCs tends to be very high profile – the rock star tutor concept. You won’t meet them but you’ll see them on video. A lot. The other sort is the cMOOC, the connectivist MOOC. These were pioneered by Canadian educators before MOOCs became big. Built around the theory of connected environments, participants create the course together, very loosely structured, very collaborative, very focused on participant contributions. Not about the rock star professors. This difference has been quite a big press thing; xMOOCs have had a bashing, with people suggesting they are “elearning from 1998 minus the login button”. But actually what Sian and I have been finding is that in ANY MOOC we see much more than these two different forms. Our own MOOC is really neither an xMOOC nor a cMOOC but had a lot of other content.

So our MOOC, #EDCMOOC, was based upon a module of the MSc in Digital Education that generally has about 12-16 participants, instead trying out these ideas about the self in online environments in a MOOC format, at huge scale. So we decided that rather than doing a week by week, lecture heavy format, we would do something different. Instead we did a “film festival” – clips for participants to watch and talk about. Then some readings on the theory of digital education. And questions to discuss. We asked students to create public facing blogs which we linked to, and we also used the built in discussion spaces. And instead of weekly tests etc. we had a single peer assessed “digital artefact” final assignment.

We gathered all the blogs, which participants had registered with us, in one place – so you could see any post tagged with #EDCMOOC. And we had a live hangout (via Google+ / YouTube) every few weeks – we would pick up on discussions and questions that were coming up in those discussions and coming in live. The students themselves (42k of them) created a Facebook group and a G+ group, and used the hashtag, but these additional groups also meant there was so much material being produced, so much discussion and activity, beyond a scale anyone could keep up with. A hugely hectic space for five weeks, with everyone trying as best they could to keep an eye on their corner of the web.
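
The talk doesn’t describe how the blog aggregation was actually built, but the general pattern – poll each registered participant’s feed and keep the posts carrying the course tag – is easy to sketch. A hypothetical Python version using feedparser (the feed URLs are placeholders, and this is an assumption about the approach, not the EDCMOOC team’s actual tool):

```python
# A minimal sketch of tag-based blog aggregation, not the real EDCMOOC aggregator.
import feedparser  # pip install feedparser

FEEDS = [
    "https://example-participant-one.wordpress.com/feed/",                       # placeholder
    "https://example-participant-two.blogspot.com/feeds/posts/default?alt=rss",  # placeholder
]

def edcmooc_posts(feed_urls, tag="edcmooc"):
    """Yield (title, link) for every post in the given feeds tagged with `tag`."""
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            tags = {t.get("term", "").lower() for t in entry.get("tags", [])}
            if tag in tags:
                yield entry.title, entry.link

for title, link in edcmooc_posts(FEEDS):
    print(f"{title} -> {link}")
```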

Bonnie Stewart described our MOOC as “subverting its own conditions of existence”. And it was a chance to rethink that xMOOC/cMOOC divide, but also what the teacher is in a MOOC, and what it means pedagogically to be in a MOOC. There are interesting generative questions that have come out of this experience.

So, I want to show you some examples of materials participants made on the MOOC. Students shared these on Padlet walls. We also had an image competition halfway through the MOOC, e.g. “All Lines are Open” by Mullu Lumbreras – the Tokyo underground map re-imagined with many “You are here” markers – emphasizing the noisiness of the MOOC! There were many reflective and reflexive posts about students trying to get to grips with the MOOC itself, as well as the content. There was such a variety of artefacts submitted! There were images, videos, all sorts of assignments including super critical artefacts, such as Chris Jobling’s “In a MOOC no-one hears you leave” – although interestingly we did. There was also a chatbot assignment – allowing you to talk to “an EDCMOOC participant” – which used comments from chats and from the course to give back responses, a really interesting comment on the nature of the MOOC and the online environment. We also had a science fiction story all created in Second Life. This must have taken such a lot of time. We have found this on the MSc in Digital Education as well: when you give people the opportunity to create non textual assignments and contributions they are so creative and take such a lot of time over their multimodal work.

We also had – a nod to Artist Rooms colleagues – a Ruschagram tool as an assignment. And indeed people used their own experience or expertise to bring their own take to the MOOC. Artists created art, scientists drew on their own background. Amy Burbel – an artist who does lots of these online videos – made one that was all about the EDCMOOC.

Image of Jen Ross and Sian Bayne

So I’d like to finish with some ideas and questions here for discussion… Elizabeth Merritt from the Centre for the Future of Museums asks about MOOCs in terms of impact. Rolin Moe talks about MOOCs as public engagement on a different scale. Erin Branham asks about reach – why wouldn’t you run a MOOC even if only 20k people finish? We have comments on that actually… David Greenfield emphasises the innovation aspect: they are still new, we are still learning and there is no one single way that MOOCs are being used. There is still a lot of space for innovation and new ideas.

Q&A

Q1) I work at the Tate in visual arts, the idea of assessment by multiple choice is very appealing so I wanted to ask about peer assessment. How did that work? Did there need to be moderation?
A1 – Jen) It is quite controversial, and that’s partly because the MOOC platforms don’t handle peer assessment too well. We didn’t get asked too much to remark assignments. Peer assessment can work extremely well if the group know each other or share a common understanding.

A1 – Sian) It was strange how assessment focused many people were for a non credit bearing course though, they wanted to know how to pass the MOOC.

Q2) I wanted to ask about the drop out which looked absolutely huge…

A2 – Sian) You mean people who didn’t begin to engage with the MOOC? It is problematic… there has been a lot of criticism around drop outs. But we have been looking at them from a traditional education point of view. MOOCs are free, they come in, they sample, they leave. It’s about shifting our understanding of what MOOCs are for.

Q2) What did you learn from that…?

A2) I think it would be too hasty to draw too many conclusions from that drop off, because of what it means to be in a MOOC.

A2 – Jen) There is some interesting research on intentions at sign up. Around 60% of people signing up do not intend to complete the MOOC. I don’t think we will ever get 90% retention like we do on our online MSc. But Sian’s point here holds. Different demographics are interested for different reasons. Retention on the smaller equine science MOOC was much more about the participant interest rather than the content or pedagogy etc. The course with the 7% retention rate was the one with the more innovative assessment project.

Q3) We would love to have that data on drop outs. We aren’t allowed to fail at that rate in public. I work in the National Library of Scotland and we know that there is “library anxiety”.  I would hate to think this is a group with inflated library anxiety!

A3) Absolutely, and I know there will be more on this later on. But it’s about expectation setting within the organisation.

Q3) Just getting that data though – especially the research on those who don’t want to complete – would be so valuable for managing and understanding that completion in open contexts.

Q4) Perhaps the count should be from the first session, not from those who sign up. It’s not the initial sign up we are concerned with but the regular drop out after that, which would be more concerning. We get people doing this with on site free experiences. This is more about engaging with the higher up decision makers and marketing about how we could use MOOCs in cultural heritage.

A4 – Sian) It was unfortunate that many of the MOOCs really marketed sign up rates, and inflated expectations from that, as a way to promote the MOOCs early on. Very unhelpful to have messages like “we want this one to hit a million sign ups!”

Q5) These aren’t credit bearing but are there MOOCs which are, how do they work?

A5 – Jen) Quite new territory. Some allow you to have some sort of credit at the end of the MOOC on payment of a fee. And some – including the University of Central Lancashire – are trialling MOOC credit counting for something. There is work at European level there too. But no one has cracked it yet.

A5 – Sian) Two offering credit so far – one at Oxford Brookes, one at Edge Hill.

Q5) Maybe credit will appeal to those currently absent from the demographic profile – moving to those with few or no higher level qualifications

A5 – Sian) We did ask people about why they did the MOOC: many for fun, some for professional reasons, none for credit.

Q6) what are the indirect benefits of the programme?

A6 – Sian) We have had five or six people enrolling on the MSc as a direct result of the MOOC. We also got great publicity for being at the forefront of digital education, which is great for the University. That indirect benefit won’t last of course as MOOCs get more mainstream but…

A7 – Sian) 40 days of academic staff time to develop, 40 days to deliver it. And that doesn’t include the Information Services staff time to set up the technology. In terms of participants I’m not sure we have that data.

A7 – Jen) We kind of have it but it’s taking a long time to analyse. You get a lot of data from the MOOCs. There is a whole field of learning analytics. We have the data from both runs of the MOOC but it’s hard to find the best way to do that.

Q7) Interesting, for people reflecting on their own time investment

A7) We gave guide time of 5-6 hours per week for the basic involvement but actually many people spent a lot of time on it. And there was a lot of content so it took that long to read and engage with it for many participants.

Q8) How do you assess 40k people?

A8 – Sian) Well that’s why we spent a lot of time trying to make the assessment criteria clear for people marking each other.

Q9) Can you say a bit more about xMOOCs and cMOOCs. A lot seem to be xMOOCs?

A9) There is a lot of discussion around how to go beyond the bounds of the xMOOC.

A9 – Sian) Our MOOC was seen as quite innovative as we were a bit of a hybrid, but a lot of that was about participants using social media and just having a hashtag made a difference.

Q9) So are there people trying to move out of the platform…

A9 – Jen) For the credit and micro-credit courses you try to bring students into the MOOC platform as that is easier to measure. And that’s an area that is really becoming more prominent…

A9 – Sian) It would be sad if the move towards learning analytics took away the social media interactions in MOOCs.

A9 – Jen) We do see AI MOOCs where there is some opportunity to tailor content which is interesting…

Comment) Can see these working well for CPD.

:: Update: Jen and Sian’s Prezi can be viewed online here ::

The changing landscape of teaching online: a MoMA perspective – Deborah Howes (Museum of Modern Art)

It is a pleasure for me to tell you just a little bit about what has been going on at MoMA, especially having spoken to just a few of you – I realise you are a very savvy digital education and cultural education audience.

I like to start with this slide when I talk about online learning at MoMA – of MoMA education broadcasts in the 1950s. We have always been interested in technology. It is part of our mission statement to educate (the world) about the art of our time. This image is from the 1950s, when MoMA had an advanced idea of how to teach art and creativity – and they invited TV crews in from Rockefeller Center to record some of what was going on in terms of that education.

So online learning for MoMA can be something as simple as an online Google Hangout working with seniors who go on a field trip once a month without having to leave their apartments – they have a museum visit and discuss the art. Some have mobility issues, some have learning disabilities. But they have these amazing opportunities to visit and engage all the time for free. We use Google Hangouts a lot and this is an example that really hits home.

Image of Deb Howes (MoMA)

This example, like much of what I’ll talk about today, isn’t strictly a MOOC but it’s from that same open online concept, and the MOOC is changing. However we have, at MoMA, been running online courses since 2010. These are NOT MOOCs as we charge for them. You can take them in two ways. You can be self led – there is no teacher responding to you and there are no fellow students, but you go at your own pace whenever you want. Or you can do the teacher led version with a teacher, with fellow students, with responses to your comments. We developed the concept for these courses with Faith Harris, who now works at Khan Academy and who was teaching online at the New York Museum of Fashion. She had a clear idea of what the format was – a structured course led by an educator. We did a studio course – how to paint – to see if that would work. That seemed such an unusual idea at the time but these courses are really popular, especially as an instructor led experience. People like to see and share progression and to get feedback on that, just like a real studio experience. With the “how to” videos, one of the things we tried to replicate online was the feel of exclusivity you have in an on-site course. If you enrol in person you get to paint in our studio and you get access to the galleries when no-one else is around. So here we have Corey D’Augustine – he’s also an artist, the students love him – and you can see this video of how to paint like Jackson Pollock and really get into that free form, jazz playing vibe.

In my previous role I came from a gallery where I had no idea who was doing my tour, or what they were getting from it; then I was in an academic place where I knew who everyone was, how they were progressing, assessing them etc. So in this role the online teaching experience has been really interesting. In particular, by taking out the temporality and those barriers to speaking up, you open up accessibility to a much, much wider audience. The range of learning difficulties that students come in with and feel able to participate online with, where they wouldn’t feel able to participate as fully in person, is striking.

We use a course management system called Haiku. No matter what you do it looks like a bad high school newspaper. It organises content top to bottom, welcome messages, etc. 60% of the students on the MoMA online courses have never taken an online course before. They tell us they’d rather try it with us! We have a lot of first timers so we have to provide a lot of help and support. We try to make the courses engaging and lively. The upside of the highly controlled space is that the teachers themselves are making these courses, so it’s easy for them to change things.

We try to think thematically about content, rather than thinking academically along a timeline, say. So colour as a way to explore modern art came to mind, which also broadens the base beyond painting and sculpture – design and architecture for instance. So this way we can interview the curator of design, Paola Antonelli, on colour in design. [we are watching a clip of this]. Talk about exclusivity! Even on my 11 o’clock tour I couldn’t get you time with Paola. The students really respond to this. And we also created videos of the preservation techniques around colour.

This course, “Catalysts: Artists creating with sound, video and time”, brings all those ideas together, and is a hybrid of xMOOC and cMOOC although I only just realised this! We got the author Randall Packer to put this history together using artefacts and resources from MoMA. It’s so hard to do this history – why read a book on the history of video artworks?! As an educator, how many museums have the space to show a whole range of video art? Even at the new Tate underground you have a rotating collection. It is rare to have an ongoing historical way to explore these. One of the reasons MoMA was able to jump into online courses feet first is that Volkswagen are a corporate sponsor of the galleries and were keenly supportive. And as part of teaching the Catalysts course Randall, who is also a practicing artist, thought it would be great if we could get students to make and share work – wouldn’t it be great to make a WordPress blog they could use to share these and comment on each other’s work. And my colleague Jonathan Epstein suggested digital badges – they get a MoMA badge on their blog and badges for LinkedIn profiles etc.

So, over three and a half years we’ve registered about 2,500 students. Small compared to MOOCs but huge for us. Around 30% of enrolees are not from the US, and that 30% represents over 60 countries. For us it was about engaging in a sustained way with people who couldn’t come to MoMA, or couldn’t come often to MoMA, and we really think we’ve proved that. This is one of those pause moments for us… so, any questions…

Q&A

Q1) That quote on your slide – “the combination of compelling lectures with the online gallery tours and the interaction with the other students from around the world was really enlightening and provocative” – what do you learn from these participants?

A1) We do find students who set up ongoing Facebook groups, for instance, and they are really active for a long time; they will go on a trip and write to their peers about what they’ve seen. We learn whilst they take the course, but also over time. What is so hard for museums to learn is what the long term impact of a museum visit is… there is no way to know what happens months or years later, or when they are at another gallery… But you get a sense of that in the Facebook groups.

Image of Deb Howes (MoMA)

Q2) At the moment it’s $25 to come into MoMA. How much are the courses?

A2) It is. But it’s a sliding scale of prices. For self-led courses… 5 weeks is $99 if you are a member (of the museum), or $150 for a non member for a 10 week course. For instructor led it’s $150 to $350 per course depending on length etc. They may fluctuate, probably go down. I like the idea of a cost recovery model. Free is hard for me as an instructor. But there is a lot of free stuff, especially in the MOOC world, and people are comparing what’s available, what the brand is worth, what is worth doing.

Q3) Member?

A3) Of the museum. Typically at the museum you get lots of discounts, free entry etc. as part of that. I think it’s about $75 for an individual membership right now and that’s part of a wider financial ecosystem I don’t get into too much.

So… we have all these courses… We got contacted by Coursera who said “oh sorry, we can’t take your courses as you don’t award degrees, but here is a sandbox for K-12 for you”. In fact MoMA does a huge amount for teachers. We had just done a huge new site called MoMA Learning with resources for all sorts of classes. So we thought, well, this will be our textbook essentially. If we leave it there we don’t need to renegotiate all the content again. So we decided to do a four week “Art and Inquiry” MOOC. There is a huge focus in the core curriculum on discussions around primary source materials; we do a lot of training of teachers but we can’t fit enough of them in our building. We have taught a class for teachers around the country, perhaps beyond, who come for a week in the summer and talk about inquiry based learning. It just so happened that when this came together we were the first MOOC in the primary and secondary education sandbox – I think that has everything to do with why we had 17k-ish participants. We had a “huge” engagement ratio according to Coursera; they told us we were off the charts – people are watching the videos “all the way to the end!”. Huge validation for us, but if you think carefully about all the ways people are learning that satisfy them, people look for something to engage with – and museum educators are great at this, great at finding different ways to explain the same thing.

At the end of the course we had a survey. 60% were teachers. The rest were taking the course for different reasons – doctors wanting to talk about x-ray results better with patients, for example. 90% of all those who answered the survey had not been to MoMA or had an online MoMA experience before, but they did visit the website or the museum afterwards. We had more friends, we had people following and engaging with our social media. It was a wonderful way to have people access and engage with MoMA who might not have thought to before.

So I have a diagram of MOOC students. It is kind of yin and yang. The paid-for course students tend to be my age or older, highly educated, and have been to many international galleries. The Coursera students are 20-30 year olds, it’s about their career, and they take lots of Coursera courses. And what struck us was that when we put our content beyond the virtual museum walls, people really want to engage with it. In the museum we want people coming to us, to speak to us, but here they don’t visit us at all and they still want to engage.

We had 1,500 students get a certificate of completion. At MoMA we have 3 million admissions per year, and I have no idea how many take that information with them. For me as a museum professional, 17k people made an effort to learn something about MoMA, word is out, and I taught 1,500 teachers in the way I would like to, in an academic way – more than I could teach over three years, in one single summer. And the success of that means we have followed up with another MOOC – Art and Activity: Interactive Strategies for Engaging with Art. The first one runs again soon; this new course runs from July.

There are a few other things we do online… the MoMA Teens Online Course Pilot. This was a free 5 week course in art appreciation at MoMA. These were teens that had probably taken all our teen courses as part of after school programmes. They brought back to us this Real World MoMA episode. [very very funny and full of art in-jokes]

You get the idea right? I should just let the teens do all the videos! We have a new group of teens coming in doing a completely different thing. This is their medium, they understand. They combine the popular with the collection in an unforgettable way, the kids will never forget these five artists they focused on.

I just want to go through some pedagogical background here. There is a huge body of really interesting research on how the brain works, what makes memories… One of the things I always try to think about is what makes your brain remember, and why a museum is such a great way to learn. So one thing is that you learn when something new comes in – a new sight, a new sound, a new smell… Museums are like that. They are new experiences. For children they may never have been to a museum or even to the city before. I try to make the online courses take that into consideration. How can we do that, and make the brain hold on to what is being learnt?

I don’t know if Howard Gardner is familiar to you? His ideas that different brains work differently, and that we need to present material in different ways for different people. We have hands on aspects. We have scientist experts, we have critics… we try to present a range of ways into the material.

So here also is some student feedback – the idea that there is more in the course than can be absorbed, but that that is a good thing. We also try to ensure there are peer to peer aspects – to enable sharing and discussion. So here we have the learning communities from that studio course – where participants share their art… Incredible learning experiences and incredible learning communities can exist beyond the museum and beyond the university, but it is great to be there to support those communities – to answer questions, share a link etc.

I wrote a post you might like: moma.org/blog search for “how to make online courses for museums”

Moving forward we have a couple of hundred videos on YouTube but we were asked if we would put these into Khan Academy. We filtered the best down, gave them embed codes, and they have created a structure around that. As a museum you don’t have to do everything here, but reusing is powerful.

And moving forward we are doing some collaborations with the University of Melbourne.

And my forecast for museum-university partnerships? Sunny with a chance of rain! There are real challenges around contracts, ownership etc. but we can get to a place of all sunny all the time.

Q1) We would be developing online learning as a new thing. When you decided to go down the online route did you stop anything else? Did you restructure time? How does that fit with curator duties?

A1) We didn’t drop anything. The Volkswagen sponsorship allowed us to build the team from myself and an intern to include another individual. But it’s a huge time commitment. Curators don’t have the time to teach but they are happy to talk to camera and are generally very good at it. I was at Johns Hopkins, and previously to that at the Metropolitan Museum… I was used to having media equipment to hand. There wasn’t that at MoMA, but we created a small studio which makes it easy for curators to pop in and contribute.

Q2) Could you say a bit about the difference between practical and appreciation type classes?
A2) For practical classes the key is *really* good videos. Being able to replay those videos, if shot well, is really helpful and clears up questions. It lets them feel comfortable without asking the teacher over and over again. If you’ve ever been in a group critique, that can be really intimidating… it turns out that with the distance of photographing your work, posting it online, and discussing it online… students feel much better about that. There is a distance they can take. They can throw things at the wall at home as they get critiqued! It is popular, and now online you find a lot of low price and free how to courses. But for our students who return it’s about the visits to the gallery, the history of the gallery, connecting the thinking and the artwork to the technique.

Q2) So unspoken assumptions of supplies available?

A2) No, we give them a supply list. We tell them how to set up a studio in their own bedroom etc. We don’t make assumptions there.

Beyond the Object: MOOCs and Art History – Glyn Davis (University of Edinburgh)

Our final speaker is one of the “rock star lecturers” Jen mentioned!

So, in comparison to the other speakers here, the course I have been preparing has not yet run. We have just under 12,000 signed up so far, and we anticipate around the 20k mark. I am an academic and I teach film studies, particularly experimental cinema. A lot of the films I talk about can be hugely hard for people to get hold of. That presents massive difficulties for me as a researcher, as a writer, but also for these sorts of learning experiences.

Where I want to start is to talk about Andy Warhol, and a book, Warhol in Ten Takes, edited by myself and Gary Needham at Nottingham Trent University. We start with an introduction about seeing a piece called “Does Warhol Make You Cry?” at MoMA – and he was, at the time. There were so many rights to negotiate. That book is solely about Andy Warhol’s cinematic work, focusing on 10 films in detail – those that are newly available from the archive, those where there was something new to be said. He only made films for five years – making 650 movies in that time. A lot, even in comparison to Roger Corman (5 a year or so). Some are a few minutes long, some many hours. The enormous challenge was that in 1972 Warhol took all of his films out of circulation – he wanted to focus on painting, and he was getting sued a lot by collaborators who wanted money from them. And they remained that way. Just before his death he said “my movies are more interesting to talk about than they are to watch”. He may have been joking but that sense has hung around studies of his work. Take a film like “Empire” (1964): it’s a conceptual piece – 8 hours in which, in terms of content, time passes and it gets dark – and it has been little shown. Very few of his films are in circulation. MoMA has around 40 circulation copies available but that’s one of the rare places you can see them – you can see screenings in the Celeste Bartos screening rooms. The only other place to see them is at the Warhol Museum in Pittsburgh, on VHS. If not that, it’s 16mm. You can’t pause or rewatch. It’s cold. It’s really hard to do Warhol research… and there are so many pirate copies also out there…

So are his films worth seeing or are they just conceptual pieces? Since the films have started to come out of the archives, films like Empire have been shown in their entirety… people then discuss the experience of sitting through all of them. Indeed in his PhD thesis (Motion(less) Pictures: The Cinema of Stasis), Justin Remeselnik suggests they are “furniture films” – films you can admire and engage with but which are not meant to be paid attention to for an incredibly long time… and yet Pamela Lee’s book Chronophobia talks about seeing Empire the whole way through; as a phenomenological record of pain it’s fairly incredible. She’s not alone here… another writer, Mark Leach, asked an audience to provide live tweeting during a screening of Empire, and then compiled these into the book #Empirefilm.

This is a long diversion but… Gary Needham and I tried to think hard about the experience of the Factory and the working environment there, and what it was like to see Warhol’s films in the context of other experimental filmmakers in the 1960s. In trying to put together a MOOC these ideas sat with me, as the rights negotiations for the book took place over 18 months. We had 30 new images created by the Warhol Museum – we had to apply for grants to get these made, rather than reproduced. We had materials from the BFI. We were able to use publicity materials as well. And we had to get agreements from so many people. The Whitney Museum has a Warhol Film Project and acted as our fact checker. It’s a 500k word book so that took some time. One of Warhol’s assistants, Gerard Malanga, allowed us to use his diary entries in the book. I came to Warhol knowing the rights access issues. And I came to the MOOC knowing those issues, knowing the possible time lag…

Chris provided a great introduction to Artist Rooms earlier. I head up the Art and its Histories strand; Sian and Jen head up the education strand, but I work with art historians and theorists doing research projects around the materials. So making a MOOC was an idea we thought about as a way to bring Warhol to a wider audience, and to highlight the Artist Rooms content. I had a lot of questions though, and I knew we could not use moving images at all. Could we talk about Warhol’s work without images or clips? What does that mean? Can we assume that people taking the course might source or be able to watch those things? I’ve been teaching Warhol for 15-20 years. I can show all manner of images and clips to students for teaching which are fine to use in that context but which would be impossible to use online for copyright and provenance reasons.

So, there are roughly 250 Warhol pieces in the Artist Rooms collections, and there are particular strengths there. There are a great number of posters – as Anthony d’Offay said to me, these give a great overview of events during his lifetime. There are also stitched photographs – another strength – and these are from the end of Warhol’s career. There are not many, so to have a number to compare to each other is great. There are also early illustrations and commercial works. And there are self portraits from the early to mid 80s. So for me, how do I put together a course on Andy Warhol based on this collection? His most famous work is all from about 1962 to 1966: silkscreens of Monroe, electric chairs, guns, Campbell’s soup cans. They are hugely expensive and not in the collection. But are these so familiar that I can assume those taking the course will know them? The other partners in Artist Rooms – the National Galleries of Scotland and the Tate – do have collections that cover some of this famous 1960s material, to sex up the course a bit!

So this let the course take shape. This will be a five week course. Each week will be a video lecture from me (sex, death, celebrity, money, time) and then a video interview with someone who has worked with Warhol’s work in one way or another – curators, academics, conservators etc. – who can give a fresh perspective on Warhol and what he means to them. I’ll come back to them shortly.

I’ve talked about Warhol’s ubiquity and that’s been an issue as we finalised materials, looked at editing videos. Warhol is one of the most well known artists in the world. His images circulate so widely on such a range of objects (maybe only exceeded by the Mona Lisa) that familiarity with them is high. You can buy just about everything – from mugs to skateboards… the Warhol story is extraordinary. What’s really interesting for anyone teaching art history or theory is that he provides a really interesting test case with regards to reproduction and distribution.

For instance the Marilyn Diptych (Andy Warhol, 1962). This was based on a publicity still for the 1953 film Niagara which he cropped to his liking. He started to make these works just after her suicide in 1962, and they have been described as works in mourning. They are important examples of pop art, collapsing the worlds of art and pop culture, but also commenting on the mass media reproduction of imagery. The uneven application across this piece suggests the blurring of images in newspapers, and the important differences between similar reproductions. Thomas Crow (in his essay for Art in America (May 1987), “Saturday Disasters: Trace and Reference in Early Warhol”) writes that Marilyn disappears quickly when you look at this work; what becomes clearer is the blurring, the variations in paint level. But I have been using this image to teach with Walter Benjamin’s essay on mechanical reproduction in relation to the work of art. His essential argument is that endless reproduction, owning of facsimiles etc. changes our relation to the original. It could seem less valuable… or more valuable… as we have seen with Warhol’s work. And Warhol’s own work is a reproduction itself of course. And his painting is the valuable thing… not the press still…

Being able to talk about this work and reproduction through the MOOC and the digital format adds another layer. MOOCs raise the question of what the use of gallery visits may be. What’s the difference between talking about a work and engaging with the original piece? The practice of art and art history has always involved travel to galleries, biennials, festivals. Writing about work means seeing it; there are financial angles there, there are green angles there. For example I am going to Newcastle for three days to see “Crude Oil” (Wang Bing, 2008). It is a 14 hour movie and you can only see it in installation. I intend to move in… my husband thinks I’m mad!

And what about the experience of engaging with the stuff itself? I spent three days at the Warhol Museum in Pittsburgh preparing for the MOOC, watching VHS, speaking to staff, and also looking at Warhol’s “time capsules” – receipts, ephemera; e.g. a box from 1978 is just “Concorde stuff”. I was accompanied by a curator, who opened boxes for me… some smelled bad due to moldy stuff, exploded soup cans; there was a still-inflated silly birthday cake which was a present from Yoko Ono. They are treated as art works. They are still cataloguing these things. So I spoke to the curators about how they are making the time capsules educationally engaging. They have video of celebrities going through them; for instance John Waters gives a great critique of one of the time capsules. They did a live opening, streamed to the ICA, of one of the time capsules. I mention these because they were really interesting examples of opening this type of content and artist up to others.

Let me just say a bit about how we have made the videos for the MOOC. My colleague Lucy Kendra, who had filmed other MOOC content, saw this filming experience as unusually immediate and intimate in form. We spoke to curators and conservators at the galleries, Gary at Nottingham, and Anthony d’Offay himself. We were also given access behind the scenes at the Tate Store – they took out 10 pieces as a backdrop, which was so valuable. We had interviews of an hour, an hour and a half. We have so much material. For the Warhol class there will be a required 10 minute version of each video, but we will then provide longer, possibly unexpurgated, versions for those who want to watch them the whole way through. These are fantastic and extraordinary videos. I think they are fantastic representations of these institutions and I think they may open the door to careers in some of these roles. We hope they may open doors in ways other art education courses may not do.

These interviews I could not have foreseen, but they have become the bedrock of the course, the USP, the main draw – first time perspectives on the artist and his career, on why Warhol is still of interest, and on the personal interests of the interviewees themselves. We started by thinking the issue would be about content and rights, but the interviews have gone beyond the object there.

Image of Glyn Davis (University of Edinburgh)

Q&A

Q1) Will there be assessment at the end? Will they be assessed by peers?

A1) Yes, I think there has to be for Coursera. I have PhD student Teaching Assistants and I have left some of those decisions to them. They have suggested allowing practical responses to the materials – to get a sense of the materials and a present day, contemporary approach. Or a short written text, a 2-300 word response to a work of their choosing – perhaps from Artist Rooms or perhaps another. These are great TAs, with ideas like building a map of the nearest Andy Warhol to the participant, opening up possible discussion of access. Peers will assess the work and this is where drawing on the expertise of colleagues who have run MOOCs before is so valuable.

Q2) When we did our MOOC we had an easier time with rights, but we really wanted to use films for which it was hard to find legal clips… we avoided anything we knew was of dubious origin. But we found students sharing those clips and images anyway! What do you plan to do about that?

A2) As far as I know the Warhol Museum in Pittsburgh are well aware that material leaks out… if our participants link to those things we can’t help that. We just create that distance and leave that in the students’ hands.

Comment – Deb) I feel your pain entirely! In addition to the academic excellence issue, at MoMA part of our job is about preserving the identity of the work, of the artists in our collections. We can’t distribute unofficial copies of works by artists in our collection; it wouldn’t look good. And yet… we were one of the first museums to go to Electronic Arts Intermix about using video online. They’d never really been approached to digitise their works in that sort of context. The first person I spoke to was extremely pessimistic about being able to share these works – by artists using once cutting-edge technology – online. We were able to say that in the environment of this course – a limited course, not a MOOC, where we have a lot of details on the students – it is very comparable to the classroom. We stream it, and although you probably could capture the content, most won’t. They were OK with this. We got Bill Viola, Yoko Ono, etc. allowing us to stream the content. It was costly… but I hope as we push these boundaries more, the artists and rights holders will go with that. Otherwise we will have a loss to art history and to accessing this hard to reach art. That argument of the most famous work being the most visible already is one I’ve used before; I hope that rings true.

Q3) Do you have specific goals – educational or a specific combination of enrolees – for this MOOC?

A3) There are two or three key goals. Part was a partnership between the university, the Tate and National Galleries. And part of that was about trying a MOOC as a way to do that. It might be that the Tate or National Galleries want to use one of those interviews somewhere else too. For me it is also about trying a new tool, and what is possible with that. I am interested in testing the boundaries of what Coursera will do.

Q4) With the MOOCs which you have completed… with hindsight now is there a lot that you would do differently?

A4 – Deb) Not a lot, but… there are things with the videos I wish we had done differently. I wish we had done them straight, without “last week you did X”, or done interviews with curators etc. I wish I had had the insight to bring in the right people or to make them more useful in the long term.

A4 – Sian) For our second run we did make changes. We refused to make videos the first time; we were being hard line. But the dominant comments online were “where are the professors?” and “where are the videos?”, so we made introductory videos for each week. That was the most significant change.

And with that a really interesting afternoon is complete with thanks to organiser Claire Wright, and to the Royal Society of Edinburgh for providing funding for the event.

Find out more

Liveblog: Geospatial in the Cultural Heritage Domain, Past, Present & Future

Today we are liveblogging from our one day event looking at the use of geospatial data and tools in the cultural heritage domain, taking place at the Maughan Library, part of King’s College London. Find out more on our Eventbrite page: http://geocult.eventbrite.com/

If you are following the event online please add your comment to this post or use the #geocult hashtag.

This is a liveblog so there may be typos, spelling issues and errors. Please do let us know if you spot a correction and we will be happy to update the post. 

Good morning! We are just checking in and having coffee here in the Weston Room of the Maughan Library, but we’ll be updating this liveblog throughout the day – titles for the presentations are below and we’ll be filling in the blanks as the day goes on.

Introduction

Stuart Dunn, from King’s College London, is just introducing us to the day and welcoming us to our venue – the beautiful Weston Room at the Maughan Library.

James Reid, from GECO, is going through the housekeeping and also introducing the rationale for today’s event. In 2011 JISC ran a geospatial programme and as part of that they funded the GECO project to engage the community and reach out to those who may not normally be focused on geo. Those projects, 11 of them in total, cover a huge range of topics and you can read more about them on the GECO blog (that’s here). We will be liveblogging, tweeting, sharing the slides, videoing etc. and these materials will be available on the website. Many of you, I know, will have directly or indirectly received funding from JISC for projects in the past, so hopefully you will all be familiar with JISC and what they do.

And now on with the presentations…

Michael Charno, ADS, Grey Literature at the ADS

I’m an application developer for the Archaeology Data Service and I’m going to talk a bit about what we do.

We are a digital archive based at the University of York. We were part of the Arts and Humanities Data Service, but that has been de-funded so now we sit alone, specialising in archaeology data. And we do this in various ways, including disseminating data through our website.

In terms of the kinds of maps we use in archaeology, there’s a long tradition of using maps to describe locations of events, places, communities etc. We use GIS quite a bit for research in the discipline. We have a big catalogue of finds and events – we have facets of What, Where and When. Where is specifically of interest to us. We mainly use maps to plot points locating times, events, finds etc. We also have context maps – these just show people the location in which an item was found. We also have a “clicky map”, but actually this is just a linked image of a map to allow drill down into the data.

One step up from that we use a lot of web maps – some people call them web GIS. You can view different layers, you can drill down, you can explore the features etc. But this is basic functionality – controlling layer view, panning, zooming etc. With all of these we provide the data to download and use in desktop GIS – and most people use the data this way; primarily I think this is because of usability.
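
For a sense of what this kind of lightweight web map involves, here is a rough sketch (not ADS code) using the Python folium library: a couple of hypothetical find spots plotted as clickable markers and written out as a standalone HTML page, with no server-side GIS at all.

```python
# A toy "clicky" web map in the spirit described above; the find spots are made up.
import folium  # pip install folium

finds = [
    ("Roman villa (hypothetical)", 53.9600, -1.0873),
    ("Bronze Age barrow (hypothetical)", 54.0466, -1.2430),
]

m = folium.Map(location=[53.96, -1.08], zoom_start=9, tiles="OpenStreetMap")
for name, lat, lon in finds:
    folium.Marker([lat, lon], popup=name).add_to(m)  # click a point for details

folium.LayerControl().add_to(m)  # layer switching; panning and zooming come free
m.save("finds_map.html")         # open in any browser
```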

And more recently we’ve been looking to do more with web maps. But we haven’t seen high use of these; people still tend to download data for desktop GIS if they are using it for their research. We have done a full-blown web GIS for the Framework Stansted project – there was a standalone desktop ESRI version, but they wanted a web version and we therefore had to replicate lots of that functionality, which was quite a challenge. But again we haven’t seen huge usage; people mainly use the data in their desktop applications. I think this is mainly because of the speed of using this much data over the web. But the functionality is there.

We have found that simplicity is key. But we think that web GIS isn’t realistic. We aren’t even sure web mapping is that realistic. If people are really going to use this data they are going to want to do so on their own machines. We thought these tools would be great for those without an ESRI licence, but there are now lots of good open source and free to use GIS – Quantum in particular – so we increasingly discourage people from giving us money to create web GIS. Instead we’re looking at an approach of GeoServer over a spatial database in Oracle to disseminate this data.

Issues facing the ADS now are the long term preservation of data and mapping (ArcIMS is no longer supported by ESRI, for instance); usability – we can upgrade these interfaces but making changes also changes the usability, which can be frustrating for users; proprietary technology – the concern is around potential lock in of data, so we are moving to make sure our data is not locked in; licensing – this is a can of worms, talk to Stuart Jeffrey at the ADS if you want to know more about our concerns here; and data – actually we get a lot of poor quality or inconsistent data, and that…

ARENA project – a portal to search multiple datasets. This used What, Where and When key terms. The What was fine – we used a standard method here. When was challenging but OK. But Where was a bit of an issue; we used a box to select areas. We tried the same interface for TAG – the Transatlantic Archaeology Gateway service – but this interface really didn’t work for North America. So we want to be able to search via multiple boxes, and we want to do this in the future.

ArcheoTools – we wanted to analyse texts including grey literature. There was spatial information we could easily pull out and plot. Modern texts were OK, but older texts – such as those of the Society of Antiquaries of Scotland – were more challenging. The locations here include red herrings – references to similar areas etc. We partnered with the Computer Science Department at the University of Sheffield for the text mining. Using the KT/AT extension and CDP matching we had about 85% matches on the grey literature. We also tried EDINA’s GeoCrossWalk, which gave even better accuracy – only 30 unresolved place names. I think we didn’t use the latter in the end because of disambiguation issues – a challenge in any work of this type. For instance, when we look at our own data it’s hard to disambiguate Tower Hamlets from any towers in any hamlets…
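
The ADS work used the tools named above, but the first step – spotting candidate place names in report text before any gazetteer matching – can be illustrated with an off-the-shelf NER model. A rough stand-in in Python using spaCy (an assumption for illustration, not the Sheffield/ADS tooling):

```python
# Spot candidate place names in a snippet of (invented) grey literature text.
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

text = ("Excavations near Tower Hamlets revealed a medieval ditch; "
        "comparable features are known from York and North Berwick.")

doc = nlp(text)
places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
print(places)
# These are only candidates: 'Tower Hamlets' vs. towers-in-hamlets style ambiguity
# still needs gazetteer lookup and context to resolve, as discussed above.
```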

Going back into our catalogue, ArchSearch – you can drill through area sizes – we were able to put this grey literature into the system at the appropriate level. We also have new grey literature being added all the time, already marked up. So this lets us run a spatial search of grey literature in any area.

When we rolled out the ability to search grey literature by location, we saw a spike in downloads of grey literature reports. Google was certainly trawling us, and that will throw the figures, but it was definitely useful for our users too, with a spike in their use as well.

Again looking at ArchSearch: one of the issues we have is the quality of the records. We have over 1 million records. We ingest new records from many suppliers – AH, county councils etc. – and add those to our database. We actually ran a massive query over all of these records to build our own facet tree to explore records in more depth. We want to capture the information as added, but also connect it to the correct county/parish/district layout as appropriate. We also have historical counties – you can search for them but it can be confusing; for instance Avon doesn’t exist as a county anymore but you will find data for it.

The other issue we find is that specific coordinates can end up with points being plotted in the wrong county because the point is on the border. Another example was that we had a record with a coordinate for Devon but it had an extra “0” and ended up plotted off the coast of Scotland!
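
That kind of slip is cheap to catch with a bounds check before plotting. A purely illustrative Python sketch – not ADS code, and the Devon bounding box is a rough, made-up approximation in OSGB metres:

```python
# Flag grid references that fall outside a rough bounding box for their claimed county.
COUNTY_BOUNDS = {
    # county: (min_easting, min_northing, max_easting, max_northing) - illustrative values
    "Devon": (210_000, 40_000, 340_000, 150_000),
}

def plausible(county, easting, northing):
    xmin, ymin, xmax, ymax = COUNTY_BOUNDS[county]
    return xmin <= easting <= xmax and ymin <= northing <= ymax

print(plausible("Devon", 280_000, 95_000))    # True: inside the rough box
print(plausible("Devon", 2_800_000, 95_000))  # False: an extra "0" on the easting
```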

I know that Stuart will be talking about DEEP later, which is great; we would love to have a service to resolve placenames for our future NLP work so that we can handle historical placenames, spatial queries and historic boundaries. It would be nice to know we remain up to date/appropriate to the date, as boundaries change regularly.

The future direction that we are going in is WMS publishing and consumption. For instance we are doing this for the Heritage Gateway. Here I have an image of Milton Keynes – not sure if those dots around are errors or valid. We are putting WMS out there but not sure anyone’s ready to consume that yet. We also want to consume/ingest data via WMS to enrich our dataset, and to reshare that of course.
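
For a flavour of what consuming such a WMS looks like programmatically, here is a short sketch using OWSLib in Python; the service URL and layer name are placeholders for illustration, not a real ADS endpoint.

```python
# Request a rendered map image from a (hypothetical) GeoServer WMS endpoint.
from owslib.wms import WebMapService  # pip install OWSLib

wms = WebMapService("https://example.org/geoserver/ows", version="1.1.1")
print(list(wms.contents))                 # layer names the service advertises

img = wms.getmap(
    layers=["ads:monuments"],             # hypothetical layer name
    srs="EPSG:4326",
    bbox=(-0.85, 51.95, -0.65, 52.10),    # roughly the Milton Keynes area
    size=(512, 512),
    format="image/png",
    transparent=True,
)
with open("monuments.png", "wb") as f:
    f.write(img.read())                   # the rendered tile, ready to display or overlay
```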

And finally we are embarking on a Linked Data project. We currently have data on excavations as Linked Data but we hope to do more with spatial entities and Linked Data and GeoSPARQL type queries. Not quite sure what we want to do with that because this is all new to us right now.
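
As a flavour of where that could go, here is a hedged sketch of a GeoSPARQL query issued from Python with SPARQLWrapper; the endpoint URL, graph structure and predicates are assumptions for illustration, not a live ADS service.

```python
# Find excavation records whose geometry falls inside a rough box around York.
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install SPARQLWrapper

sparql = SPARQLWrapper("https://example.org/ads/sparql")  # hypothetical endpoint
sparql.setQuery("""
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>

SELECT ?excavation ?wkt WHERE {
  ?excavation geo:hasGeometry ?g .
  ?g geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((-1.2 53.9, -1.0 53.9, -1.0 54.0, -1.2 54.0, -1.2 53.9))"^^geo:wktLiteral))
}
LIMIT 20
""")
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["excavation"]["value"], row["wkt"]["value"])
```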

Find out more:

  • http://archaeologydataservice.ac.uk/
  • @ADS_Update
  • @ADS_Chatter
Q&A
Q1: It seems like your user community is quite heterogeneous – have you done any persona work on those users? And are there some users who are more naive?
A1: We’ve just started to do this more seriously. Registration and analytics let us find out more. Most are academics, some are commercial entities, but the largest group are academics. I think both groups are equally naive actually.
Q2: Why Oracle?
A2: Well, the University has a licence for it. We would probably use PostgreSQL if we were selecting from scratch.

Claire Grover, University of Edinburgh, Trading Consequences

This is a new project funded under the Digging Into Data programme. Partners in this are the University of Edinburgh Informatics Department, EDINA, York University in Canada and University of St Andrews.

The basic idea is to look at the 19th century trading period and commodity trading at that time, specifically for economic and environmental historical research. They are interested in investigating the increase in trade at this time, and the hope is to help researchers in this work to discover novel patterns and explore new hypotheses.

So consider a typical map a historian would be interested in drawing. If we look at Cinchona, the plant from which quinine derives, it grows in South America but they began to grow it in India to meet demand at the time. Similarly we can look at another historian's map of the global supply routes of West Ham factories. We want to enable this sort of exploration across a much larger set of data than the researchers could look at themselves.

We are using a variety of data sources, with a focus on Canadian natural resource flows to test reliability and efficacy of our approach and using digitised documents around trading within the British Empire. We will be text mining these and we will populate a georeferenced database hosted by EDINA, and with St Andrews building the interface.

Text mining wise we will be using the Edinburgh GeoParser, which we have developed with EDINA and which is also used in the Unlock Text service. It conducts named entity recognition – place names and other entities, and we will be adding commodities for Trading Consequences – and then there is a gazetteer lookup using Unlock, GeoNames, and Pleiades+, which has been developed as part of the PELAGIOS project. The final stage is georesolution, which selects the most likely interpretation of place names in context.
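
The Edinburgh GeoParser has its own pipeline, but as a rough illustration of that NER → gazetteer lookup → georesolution shape, here is a toy sketch using spaCy (assuming its small English model is installed) and the GeoNames web service as stand-ins – not what Trading Consequences actually uses.

    # Toy illustration of the NER -> gazetteer lookup shape, NOT the Edinburgh GeoParser.
    # Stand-ins: spaCy for named entity recognition, the GeoNames web API as the gazetteer.
    import requests
    import spacy

    nlp = spacy.load("en_core_web_sm")
    text = "Cinchona bark was shipped from Peru to Calcutta and onwards to London."

    # 1. Named entity recognition: pull out candidate place names
    doc = nlp(text)
    places = [ent.text for ent in doc.ents if ent.label_ == "GPE"]

    # 2. Gazetteer lookup: fetch candidate locations for each name
    def gazetteer_candidates(name, username="demo"):  # you need your own GeoNames username
        r = requests.get("http://api.geonames.org/searchJSON",
                         params={"q": name, "maxRows": 5, "username": username},
                         timeout=30)
        return r.json().get("geonames", [])

    # 3. Georesolution would then pick the most likely candidate using context;
    #    here we naively take the first hit just to show the shape of the output.
    for name in places:
        candidates = gazetteer_candidates(name)
        if candidates:
            best = candidates[0]
            print(name, "->", best["name"], best["lat"], best["lng"])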

So to give you some visuals here is some text from Wikipedia on the Battle of Borosa (a random example) as run through the Edinburgh GeoParser. You can see the named entity recognition output colour coded here. And we can also look at the geo output – both the point it has determined to be most accurate and the other possible candidates.

So what exactly are we digging for in Trading Consequences? Well, we want to find instances in the text of trade-related relationships between commodity entities, location entities, and date entities – what was imported/exported from where and when. Ideally we also want things like organisations, quantities and sums of money as part of this. And ultimately the historians are keen to find information on the environmental impact of that trade as well.

Our sources are OCR textual data from digitised datasets. We are taking pretty much anything relevant but our primary data sets are the House of Commons Parliamentary Papers, Canadiana.org and the Foreign and Commonwealth Office records at JSTOR. Our research partners are also identifying key sources for inclusion.

So next I am going to show you some very, very early work from this project. We've done some initial explorations of two kinds of data using our existing text mining toolset – primarily for commodity terms to assist in the creation of ontological resources, as we want to build a commodity ontology. And we've also looked at sample texts from our three main datasets. We have started with WordNet as a basic commodity ontology, as a starting point. So in this image we have locations marked up in purple, commodities in green. We've run this on some Canadiana data and also on HCPP as well.

So from our limited starting sample we can see the most frequent location-commodity pairs. The locations look plausible on the whole. The commodities look OK but “Queen” appears there – she’s obviously not a commodity. Similarly “possum” and “air” but that gives you a sense of what we are doing and the issues we are hoping to solve.

The issues and challenges here: we want to transform historians' understanding, but our choice of sources may be biased just by what we include and what is available. The text mining won't be completely accurate – will there be enough redundancy in the data to balance this? And we have specific text mining issues: low-level text quality issues, issues isolating references, French language issues etc. And we have some georeferencing issues.

So looking at a sample of data from Canadiana we can see the OCR quality challenges – we can deal with consistent issues, "f" standing in for "ss" for instance, but can't fix gobbledegook. And tables can be a real nightmare in OCR, so there are issues there.

Georeferencing wise we will be using GeoNames as a gazetteer as it's global, but some place names or their spellings have changed – is there an alternative? We also have to segment texts into appropriate units – some data is provided as one enormous OCR text, some is page by page. Georesolution assumes each text is a coherent whole and each place name contributes to the disambiguation context for all of the others. And the other issue we have is the heuristics of geoparsing. For modern texts population information can be useful for disambiguation, but that could work quite badly/misleadingly if applied to 19th century texts – we need to think about that. And we also need to think about coastal/port records perhaps being weighted more highly than inland ones – but how do you know that a place is/was a port? We've gone some way towards that, as James has located a list of historical ports with georeferences, but we need to load that in to see how it works as part of the heuristics.
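
That heuristics point is perhaps easier to see in code – a crude sketch of scoring gazetteer candidates where modern population is down-weighted and proximity to a known historical port earns a bonus (the weights and port list are invented, purely to illustrate the idea).

    # Crude sketch of georesolution scoring with adjustable heuristics (invented weights).
    historical_ports = [(44.65, -63.57), (53.41, -2.99)]  # e.g. Halifax NS, Liverpool (stand-ins for James's list)

    def near_a_port(candidate, tolerance=0.5):
        # within roughly half a degree of any known historical port
        return any(abs(candidate["lat"] - plat) < tolerance and abs(candidate["lng"] - plng) < tolerance
                   for plat, plng in historical_ports)

    def score(candidate, population_weight=0.1, port_bonus=2.0):
        s = 1.0
        # modern population is a weak signal for 19th-century texts, so keep its weight low
        if candidate.get("population"):
            s += population_weight * min(candidate["population"] / 1_000_000, 1.0)
        # coastal/port places are more plausible in trade records
        if near_a_port(candidate):
            s += port_bonus
        return s

    candidates = [
        {"name": "Halifax (Yorkshire)",   "population": 88000,  "lat": 53.72, "lng": -1.86},
        {"name": "Halifax (Nova Scotia)", "population": 403000, "lat": 44.65, "lng": -63.57},
    ]
    print(max(candidates, key=score))  # the Nova Scotia port wins under these toy weights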

Humphrey Southall, University of Portsmouth, OldMapsonline.org

I wanted to do something a bit controversial. So firstly, how many of us have a background in the academic discipline of geography? [it's about five of those in the room]. A lot of what's going on is actually about place, about human geography. I think GIS training warps the mind, so I wanted to raise this issue of space vs. place.

There is a growing movement towards using maps both for resource discovery and visualisation, but it does lead to inappropriate use of off-the-shelf GIS solutions. There are three big problems: map-based interfaces are almost entirely impenetrable to search engines, yet search engines are how most people find and discover information – the interface is a barrier, though that doesn't mean scrapping maps; mapping can force us into unjustifiable certainty about historical locations; and this isn't actually how most people think about the world – people are confused by maps, but they can handle the textual meaning of place.

So, looking at locational uncertainty in the past: cultural heritage information generally does not include coordinates, it has geographical names. Even old maps are highly problematic as a source of coordinates. And converting toponyms to coordinates gets more problematic as we move back in time. 19th and 20th century parishes have well-defined boundaries that are well mapped – but still expensive to computerise; my Old Maps project has just spent £1 million doing this. Early modern parishes had clear boundaries but few maps, so we may know only the location of the admin centre; earlier than that and things become much more fuzzy.

If we look at county records…

Geographical imprecision in the 1801 census – it’s a muddle, it’s full of footnotes.

Geo-spatial versus geo-semantic approaches. GIS/geo-spatial approaches privilege coordinates – everything is treated as attributes of coordinate data. By comparison geo-semantic approaches are descriptive of place…

Examples of sites with inappropriate use of geo-spatial technology: Scotland's Places has a search box for coordinates – who on earth does that! But you can enter placenames. Immediately we get problems: six matches, the first two are for Glasgow the city only, and then there are four for Glasgow as a wider area. This is confusing for the user – which do we pick? Once we pick the city we get a list of parishes, which is confusing too, and we encounter an enormous results set, and most of what we get isn't information about Glasgow but about specific features near Glasgow. This is because at heart this system has no sense of place – it just finds items geolocated near features matching "Glasgow". I could show the same for plenty of other websites.

For an example of an appropriate sense of space – HistoryPin, who are speaking later, as images have an inherent sense of location. Another example is Old Maps Online.

Geo-semantics – geography represented as a formal set of words. This is about expressing geographic traits formally – IsNear, IsWithin, IsAdministrativelyPartOf, Adjoins. Clearly GIS can express some of these relationships more fully – but only sometimes, and assuming we have the information we need.
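
As a sketch of what "geography as a formal set of words" might look like in machine-readable form, here are a few of those relations expressed as RDF triples with rdflib – the vocabulary and the statements are made up for illustration.

    # Sketch: geo-semantic relations as RDF triples (made-up vocabulary, illustration only).
    from rdflib import Graph, Namespace

    PLACE = Namespace("http://example.org/place/")
    REL = Namespace("http://example.org/rel/")

    g = Graph()
    g.bind("place", PLACE)
    g.bind("rel", REL)

    # "Partick is within Glasgow; Glasgow is administratively part of Lanarkshire" etc.
    g.add((PLACE.Partick, REL.isWithin, PLACE.Glasgow))
    g.add((PLACE.Glasgow, REL.isAdministrativelyPartOf, PLACE.Lanarkshire))
    g.add((PLACE.Partick, REL.adjoins, PLACE.Govan))
    g.add((PLACE.Govan, REL.isNear, PLACE.Partick))

    print(g.serialize(format="turtle"))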

One problem we had on the Vision of Britain project was how to digitise this material. We really had to deliver to the National Archives. Frederick Youngs' Guide to the Local Administrative Units of England – no maps, no coordinates, two volumes – is a fantastic source of geographical information. This is used in Old Maps Online. There is a complex relationship between the units. Using visualisation software on the structure we built from Youngs you can find out huge amounts about a place. One point to note is that this is not simply one academic project – I've shown you some of the data structure of the project, but it's not just about one website. And we do have huge amounts of traffic – up at around 140k unique users a month. So let's do a search for a village in Britain – the suggestion from the crowd is "Flushing" apparently… Google brings back Vision of Britain near the top of the list for a search of "History of …" for any village in Britain. I'm aware of very few cultural heritage sector websites that do this. We did this partly by having very clear, semantically structured information behind the site, available for crawling. We will be relaunching the site with some geospatial aspects added, but we also want to make our geosemantic information more available to searchers. We use a simple geoparser service, mainly for Old Maps Online and the British Library. We will be making that public. And we rank results based on frequency of place name – a very different approach to that outlined earlier.

Q&A

Q1) I suspect that the reason Flushing didn’t get you to the top of the list is because the word has another meaning. What happens with somewhere like Oxford where there are many places with the same name?

A1) Well it’s why I usually include a county in the search – also likely to help with Oxford but of course for bigger places we have much more competition in Google. I think the trick here is words – Vision of Britain includes 10 million words of text.

Q2) Is this data available as an API? Or are all maps rasterised?

A2) Most of our boundaries are from UK Borders free facility for UK HE/FE. We have historic information. In terms of API we are looking at this. JISC have been funding us reasonably well but I’m not entirely happy with the types of projects that they choose to fund. We have put that simple GeoCoder live as we needed it. Some sort of reverse geocoder wasn’t too hard.

James: we support an internal WFS of all of the UK Borders data and data from Humphrey

Comment: We’ve used OS data from EDINA for our data. I was hoping there was something like that we could use over the web

James: I think it's very much about licensing in terms of the OS data; for Humphrey's data it's up to him.

Humphrey: We haven’t been funded as a service but as a series of digitisation projects and similar, we make our money through advertising and it’s unclear to me how you make money through advertising for a Web service.

Stuart Nicol, University of Edinburgh, Visualising Urban Geographies

I’m going to be talking about the Visualising Urban Geographies project which was a collaborative project between the University of Edinburgh and the National Library of Scotland funded by the AHRC.

The purpose of the project was to create a set of geo-referenced historical maps of Edinburgh for student learning purposes, to reach a broader public through the NLS website, to develop tools for working with and visualising research on maps, and to trial a number of tools and technologies that could be used in the future.

The outputs were 25 georeferenced maps of Edinburgh from 1765-1950 (as WMS, TMS and downloadable JPG/JGW) as well as a suite of digitised boundary polygons (Shapefiles and KML). We have used various individual maps as exemplars to see what might be possible – 3D boundaries etc. We also documented our workflows. And finally we created a series of web tools around this data.

The web tools are about quick wins for non-GIS specialists – ways to find patterns and ideas to build on, not mission-critical systems. To do this quickly and easily we inevitably have a heavy reliance on Google. A note on address-based history: researchers typically gather a lot of geographic data as addresses, as text, and it can be hard to visualise that data geographically, so anything that helps here is useful.

So looking at our website – this is built on XMaps with the Google Maps API and a tile map service for the historic maps. You can view/turn on/off various layers, and you can access a variety of tools and basemaps. This includes the usual Google Maps layers, also the Microsoft Virtual Earth resources as well as OpenStreetMap, so you can view any of our maps over any of these layers. You can also add user-generated data – you just need an XML, KML or RSS link to use in the tool. The Google Street View data can be very useful as many buildings in Edinburgh are still there. We have a toolbox that lets you access a variety of tools to use various aspects of the map, again just using the Google address APIs. We use the Elevation API to get a sense of altitude. We've also been looking at the AddressingHistory API – geocoding historical addresses. So here I'm looking in the 1865 directory for bakers, and I can plot those on the map.

One of the main tools we wanted to provide was a geocoding tool for researchers. Our researchers have long lists of addresses from different sources. They simply copy from their spreadsheet into the input field in our tool, the API will look for locations, and you get a list and also a rough plot for those addresses. And we've built in the ability to customise that interface. This uses Google Spreadsheets and your own account, so you can create your own sets of maps. To edit the map we have the same kind of interface on the web. You can also save information back to your own Google account. And we also have an Add NLS Data facility – using already digitised and georeferenced maps from the NLS collections.
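
For anyone wanting to try something similar without the Google dependency, here's a rough sketch of that batch-geocoding step using geopy with the free Nominatim service – not the project's actual implementation, and mind Nominatim's usage policy of roughly one request per second.

    # Rough sketch of batch geocoding a pasted list of addresses (not the VUG tool itself).
    import time
    from geopy.geocoders import Nominatim

    geolocator = Nominatim(user_agent="historic-address-demo")  # choose your own user agent

    addresses = [
        "12 Nicolson Street, Edinburgh",
        "5 George Square, Edinburgh",
        "47 Leith Walk, Edinburgh",
    ]

    rows = []
    for addr in addresses:
        loc = geolocator.geocode(addr)
        rows.append((addr, loc.latitude, loc.longitude) if loc else (addr, None, None))
        time.sleep(1)  # respect the Nominatim usage policy

    for row in rows:
        print(row)  # these rough plots can then be hand-corrected against a historic map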

You can publish this data via the spreadsheets interface and that gives you a URL that you can share which takes you to the tool.

So we went for a very lightweight mashup approach. We use Google Maps, Geocoding, Elevation, Visualisation, Docs & Spreadsheets, Yahoo geocoding, NLS Historic Mapping and AddressingHistory as our APIs – a real range combined here.

But there are some issues around sustainability and licensing here. We use Google Maps API v2 and that's being deprecated. What are the issues related to batch geocoding from Google? Google did stop BatchGeo.com from sharing batch-geocoded data as it broke third-party terms, so that's a concern. There is a real lack of control over changes to APIs – the customise option broke a while ago because the Google Spreadsheets API changed. It was easy to fix but it took a while to be reported; you don't get notified. Should we use plain HTTP or an API? Some of the maps we use are sitting on a plain HTTP server – that means anyone can access them, and speed can be variable if heavily used. The NLS have an API which forces correct attribution but that would take a lot of work to put in place. And also TMS or WMS? We have used TMS but we know that WMS is more flexible and more standards-compliant.
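
On the TMS question, the tile-based approach boils down to some simple arithmetic – here's a sketch of converting longitude/latitude and a zoom level into tile coordinates using the standard spherical Mercator "slippy map" formula; TMS simply counts tile rows from the bottom rather than the top (the tile URL template is a placeholder).

    # Standard spherical Mercator tile arithmetic (illustration; URL template is a placeholder).
    import math

    def lonlat_to_tile(lon, lat, zoom):
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        lat_rad = math.radians(lat)
        y = int((1.0 - math.log(math.tan(lat_rad) + 1 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    lon, lat, zoom = -3.19, 55.95, 14        # central Edinburgh
    x, y_xyz = lonlat_to_tile(lon, lat, zoom)
    y_tms = (2 ** zoom - 1) - y_xyz          # TMS counts rows from the bottom, XYZ from the top

    print(f"XYZ tile: {zoom}/{x}/{y_xyz}.png")
    print(f"TMS tile: {zoom}/{x}/{y_tms}.png")
    # e.g. plug into a template like https://example.org/tiles/{z}/{x}/{y}.png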

And we face issues around resources and skills. We can forget that we have benefited from our partnership with NLS, with access to their collection, skills, infrastructure and all those maps. One of our more ambitious aims was that our own workflow might help other researchers do the same thing in other locations, but this isn't as easy as hoped. We have a colleague in Liverpool and a colleague in Leicester both using the tools, but both constrained by access to historical maps in usable formats, and they don't have the skills to deal with that themselves. Who should be taking the lead here? National libraries? Researchers?

In terms of what we have learned in the project, we have found it useful to engage with the Google tools and APIs as it allowed us to build functional tools very quickly, but we are aware that there are big drawbacks and limitations here. And we have successfully engaged researchers and the wider community – local history groups, secondary schools etc.

Jamie McLaughlin, University of Sheffield, Locating London's Past

Locating London's Past was a six-month JISC project taking a 1746 map, georeferencing it, and visualising textual sources and data from the period on this map. We also built a gazetteer derived from the 1746 map, and the map was vectorised for us as well so you can view all the street networks etc. Our data sources contained textual descriptions of places and we regularised these for spellings and compound names, and then these were georeferenced to show on our map.

What's interesting in exploring the data is to search for, say, murders – Drury Lane has a lot, which is perhaps not surprising. But murders…

We used Google Maps as it was so well known – it seemed like the default choice and we didn't think too deeply about it. It does do polygons and custom markers, and it does let you do basic GIS – you can measure distances, draw polygons etc. And the Google conventions are well known to users. Like the previous presentation this was a "lightweight mashups" approach. What shouldn't be underestimated is the usefulness of the user community – a huge group to ask if you have a question. The major downside of course is the usage limit – 25k map loads a day for free, after that you have to pay. These new terms came in just at the end of the project. It's a reasonable approach, and you have to exceed the limit for 90 days before charges kick in, so spikes are OK. But it's expensive if you go over your limit: $4 per additional 1,000 loads, and at really high levels $8 per additional 1,000 loads. There is a rather vague/sketchy educational programme which we hope we'd qualify for.

So retrospectively we've looked at alternatives. OpenLayers – I think Vision of Britain uses this – uses OGC standards, you can load raster or vector layers from anywhere and you're not trapped into a single projection. PolyMaps is another alternative I looked at; it uses vector layers and gets the browser to do all of the work – PolyMaps is all in a 30k JavaScript file. Mind you, we always envisioned using a raster version, but I think we could make a very cool vectorised version of Locating London's Past – the 1746 map is pretty but not essential. And Leaflet is also available; it's small and sweet and pretty, and genuinely open source as well.
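
If you want to kick the tyres of the Leaflet route from Python, the folium library wraps it nicely – a minimal sketch with a placeholder historic tile layer URL (any XYZ/TMS tile service would do; this is not Locating London's Past's actual layer).

    # Minimal Leaflet map via folium; the historic tile URL is a placeholder, not LLP's layer.
    import folium

    m = folium.Map(location=[51.513, -0.12], zoom_start=15, tiles="OpenStreetMap")

    folium.TileLayer(
        tiles="https://example.org/rocque1746/{z}/{x}/{y}.png",  # placeholder XYZ tile service
        attr="Hypothetical 1746 map tiles",
        name="1746 map",
        overlay=True,
        opacity=0.7,
    ).add_to(m)

    folium.Marker([51.5136, -0.1220], popup="Drury Lane (roughly)").add_to(m)
    folium.LayerControl().add_to(m)
    m.save("locating_london_sketch.html")  # open in a browser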

When you push more and more data over the web you are forcing the user's browser to do a lot of work. Locating London's Past relies on JavaScript and the user's browser, but it can be slow or unreliable depending on your connection. Another challenge is geocoding textual information or sources. Firstly, placenames are often not unique. In London there are lots of roads with the same name – there are 5 Dean Streets. And variant spellings aren't reliable – they can be entirely different places. In 1746 there are two Fleet Streets, and finding a tiny alley off one of them is a real challenge. We didn't leave anything like enough time to geocode the data. Our machine approach was good, but our researchers really wanted 100% accuracy, so you need humans disambiguating your geocoding.

We should also have thought further about exports and citations. The standard way to store and cite a website is a bookmark, but it's non-trivial to store web GIS data as it's so huge. If you work purely in JavaScript you'll find that difficult without hacks or HTML5. And you can have data that looks clean: here some data on plague victims has been moderated, but the boundary set we have extends into the river – it isn't accurate, and that impacts on population, land data, etc. Problems don't become apparent from the text alone.

The three big lessons from us: keep it simple – we tried to do too much, too many data sets for the time available; when the design was kept simple it was successful. Garbage in = garbage out – geocoders aren't magical! They are much, much stupider than a human no matter how good they are. Use open platforms – the API terms are worrying, and we should have used open solutions.

James: Perhaps the Google bubble has burst – even FourSquare has moved to other mapping. APIs can be changed whenever the provider likes. And I should add that EDINA runs an open web service, OpenStream, that will let you access contemporary mapping information.

Ashley Dhanani and David Jeevendrampillai, UCL, "Classifying historical business directory data: issues of translation between geographical and ethnographic contributions to a community PPGIS project"

We are trying to focus on the place of suburbs and the link between suburbs and socio-economic change. Why are suburbs important? Well, around 84% of British people live in suburbs, we've seen the London Mayoral election focusing on suburbs, and the Queen is spending some of her jubilee in the suburbs.

We see small relationships, small changes in functionality etc. in suburbs that can easily be missed. We will talk about material cultural heritage – shapes of houses, directions of roads, paths and routes taken etc. We will relate that very material heritage to the socio-economic use of buildings/places over time. And we look at meaning – what it meant socially to use the post office at different times in the last 200 years, perhaps.

We wanted to do various analyses here: a network analysis to consider the accessibility of particular spaces, and the changes in how people live in these spaces. So if we look at Kingston, in a rather manual mapping process looking at network structure, we can see in 1875 what the core area was – what was it like to be in these spaces? Again we can see change over time, and we can see the relationality to the rest of the city. This is just part of the picture of these places through time. From a material perspective we can see how the buildings change – from large semi-detached houses to small terraced rows for instance. So we want to bring this information together and analyse it. Here we need to turn these historic structures into something more than a picture, to be able to look at our…

We are using software – cheap for academic use – that allows you to batch process TIFF files and do 80-90% of the work on a good underlying map. You can then really start doing statistics and exploring the questions etc. You can basically make MasterMap for historic periods!

Back to David. We also wanted to relate these networks, roads and buildings to actual use – what was going on in these buildings at the time. So we took the Business Directory information and georeferenced it to provide points on the map. We need to categorise the types of use in the business information, so we get these rather Damien Hirst style pictures – coloured dots on the road. We had a bit of a debate, me being an anthropologist, about problematising those categorisations… what is a Post Office? Is it a Financial Service? Is it a Depot? Is it Retail? Is it a Community Service? And the answer obviously is: what do you want to get from this data, why are you looking at it in the first place?

So we wanted to know what these elements of the built environment meant. What does a relocated post office mean socially? We wanted to add another layer of information: archives, memories, photos etc. We are taking the archive and making it digital. But I want to talk a bit about limitations here. Trying to understand a place through point information, looking at a top-down map, doesn't include that ephemeral information – the smell of a building perhaps. What we're doing in this project is bringing in lots of academics from different disciplines, and you get very different uses of the same data sources. The gaps that we've found between understandings of the data have been very productive in terms of understanding our data, place, and what place means for policy-based outcomes. And rather than coming to a coherent sense of place, actually the gaps, the debates, are very productive in themselves. We are one year in – we have 5 years funding in total – but those gaps have been the most interesting stuff so far.

And this kicking up of dust in the archives has only happened since we've been able to turn materials into digital form – they can be digitised, layered up, used together. Whilst this is very productive we will have gaps and slippages of categorisation, and these highlight our ways of understanding what goes on in a place.

Q&A

Q1) What software did you use for this project?

A1) RX Spotlight [not sure I’ve got that down right – comment below to correct!]

Q2) Interesting to hear about the issues with Google Maps – are any of the Open Source, truly free services, better with mobile?

A2) There is an expectation on mobile phones – there’s a project we’re working on with LSE on the Charles Booth property maps – which is hampered by the available zoom levels. There are workarounds, other data providers are part of this option. You have CloudMade based on OpenStreetMap data. We have OpenStream for HE projects.

Humphrey: We planned to use the Google geocoder for Old Maps Online but they changed the terms and we expected high usage. We went for OpenStreetMap as truly free, but it's problematic. And so we have implemented our own API from Vision of Britain. We do use Google Basic and again we are concerned about going over our limits. Using a geocoder does let you mark up data for use with other maps. But if you were using linked data and identifiers and it was Google or similar providing them, that would be very concerning.

James: Especially with mobile phones there is a presumption of very large scale. We were involved in the Walking Through Time project and the community wanted Google – the zoom levels killed it. There are issues around technical implementations. Think large scale for mobile. I do know that Google have been thinking of georeferencing as context for other information. Place is something else but implies some geography.

Comment: Leaflet works well on mobile.

James: We will come back to this later – discussing what we are using, what we need, etc.

And now for lunch… we’ll be back soon!

And we’re back…

Chris Fleet, National Library of Scotland, Developments at the NLS

I'm going to be talking about our historic mapping API which we launched about 2 years ago. This project was very much the brainchild of Petr Pridal, who now runs his own company, Klokan Technologies. The API is very much a web mapping service.

So to start with let me tell you a bit more about the National Library of Scotland. We aim to make our collections available, and with maps most of our collection is Scottish but we also have international maps in the collection. There are 46k maps as ungeoreferenced images with a zoomable viewer. The geo website offers access via georeferenced search methods. We've been a fairly low-budget organisation so we've been involved in lots of joint projects to fund digitisation. And there is even less funding for georeferencing, so we have joined up with specific projects to enable this. For instance we have digitised and georeferenced the Roy Military Survey map of the 18th century, town plans from the Ordnance Survey, aerial photographs of the 1940s, and Bartholomew mapping – we are fortunate to have a very large collection of these. And we've been involved in various mashup projects including providing maps for the Gazetteer for Scotland project.

So in early 2000 Petr had this idea about providing a web mapping service. There were several maps already georeferenced – a 1:1 million map of the UK from 1933 – and we had several other maps at greater detail for similar areas. Although we use open source GIS and Cube GIS, we have found that ArcGIS is much easier for georeferencing, adding lots of control points, and dynamically visualising georeferenced maps. We used Petr's MapTiler (this has now been completely rewritten in C++, is available commercially and runs much faster) and TileServer. These tools take the control-point coordinates and reproject ("spherise") your map into the spherical Mercator tiling used by tools like Google Maps or Bing.

We launched in May 2010 with examples of how to use the maps in other places and contexts. We put the maps out under a Creative Commons Attribution licence – more liberal than the NLS normally licenses content.

Usage to date took a while to take off, most of our users are from a UK domain – unlike most of our maps collection – and most of our use has been in the last year or so. I’ve divided usage into several categories – recreation, local history, rail history, education etc.

Bill Chadwick runs the Where's the Path website and they use a lot of data – they display our historic maps, and other big websites link through via that site, which is where a lot of the hits have come from. A lot of our phone use has been for leisure – with the maps as a layer in another tool for instance.

Looking at how our maps have been used, the variety has been enormous – leisure walkers, cyclists, off-road driving, geocaching as well! We also have lots of photographers using our maps. And metal detecting – I had underestimated just how big a user group they would be, including the Portable Antiquities Scheme website. And there are many family history users of these maps – for instance the Borders Family History Society links to resources for each county in Scotland. There is also the area of specialist history: SecretWikiScotland – security and military sites; Airfield Information Exchange; Windmill World; steam train history sites etc. And another specialist area: SABRE – the group for road history; if you've ever wondered about the history of the B347, say, they are the group for you. They have a nice web map service ingesting multiple maps including our maps API. And finally Stravaiging (to stravaig is to wander or meander) – you'll find our maps there too.

Education was quite a small user of our maps. EDINA and others already cater to this group. But there was a site called Juicy Geography aimed at secondary school children that uses them. And the Carmichael Watson project, based at Edinburgh University, shows georeferenced transcripts against our historic maps.

We know OpenStreetMap has been using our maps though they don't show up in our usage data. Through them we've connected to a developer in Ireland. This is one of those examples where sharing resources and expertise has been useful both for our own benefit and for the OpenStreetMap Ireland coverage.

The NLS is also now using the API in its own geo mapping and georeferencing services, and we now have a mosaic viewer for these maps. Through the API and other work we've been able to develop a lot of maps, including the 10 inch to a mile series for the UK. And we are working on the 1:25k maps. We hope to add these to our API in due course.

In terms of sustainability, the NLS has supported and continues to support the API. We are looking at usage logging for large/commercial users – some users are huge consumers, so perhaps we can license those types of use. Ads perhaps?

Top tips? Well firstly, don't underestimate how large and diverse the "geo" community is. Second, don't overestimate the technical competence of the community – it is very variable. And finally, don't underestimate the time required to administer and sustain the application properly – we could have worked much harder to get attention through blogs, tweets, etc. but it requires more serious time than we've had.

Q&A

Q1) One of your biggest user groups is outdoor recreation – why are they using historic mapping?

A1) I think generally they are using both, using historic maps as an option. But there could be something cleverer going on to avoid API limitations. If you are interested in walking or cycling you can get more from the historic maps from 60 years ago than from modern maps.

Rebekkah Abraham, We Are What We Do, HistoryPin

I am the content manager for HistoryPin. HistoryPin, as I'm sure you will be aware, lets people add materials to the map. It was developed by We Are What We Do and we specialise in projects that have real positive social impact. The driver was the growing gap between different generations. Photographs can be magical for understanding and communicating between generations. A photograph is also a piece of recorded history, rich in stories – it belongs to a particular place at a particular time. If you then add time you create really interesting layers and perspectives on the past. And you can add the present as an additional layer – allowing compelling comparisons of the past and the present.

So historypin.com is the hub for a set of tools for sharing historical content in interesting ways and engaging people with it. It's based on Google Maps; you can search by place and explore by time. You can add stories, material, appropriate copyright information etc., and the site is global. We have around 80k pieces of content and are working with various archives such as the UK National Archives, National Heritage etc. And we are also starting to archive the present as well.

Photographs can be combined with audio and video – you can pin in events, audio recordings, oral history, etc. We're also thinking about documents, text, etc. and how these can be added to records. You can also curate: you can create tours through materials and take others through them, and the mapping and timeline tools can be very nice here. Again you can include audio as well as images and video.

We also have a smartphone app for iPhone, Android and Windows Phone, and that lets you go into Street View to engage with history; you can add images and memories to the place you currently are. And you can fade between the present camera view and historic photographs, and you can choose to capture a modern version of that area – great if an area lacks Street View, and you are also archiving the present as well.

At the end of March we will launch a project called HistoryPin Channels – this will let you customise your profile much more, to create collections and tools, another way to explore the materials and to see stories on your content. This will also work with the smartphone app and be embeddable on your own website.

And we want to open HistoryPin to the crowd, to add tags, correct locations, etc. so that people can enhance HistoryPin. You could have challenges and mysteries – to identify people in an image, find a building etc. Ideas to start conversations. A few big questions for us: how do you deal with objects from multiple places and multiple times, and how do you deal with precision?

Pinning Reading’s History – we partnered with Reading Museum to create a hub and an exhibition to engage the local community. Over 4000 items were pinned, we had champions out engaging people with HistoryPin. The value is really about people coming together in small meaningful ways.

Q&A

Q1) We’ve been discussing today that a lot of us work with Google APIs but don’t communicate with them. I understand that HistoryPin have a more direct relationship

A1) Google gave us some initial seed funding and technical support; everything else is owned and developed by We Are What We Do.

Q2) Who does uploaded content belong to?

A2) That's up to the contributors – they select the licence at upload so ownership remains theirs.

Q3) Will HistoryPin Channels be free?

A3) Yes. Everything around HistoryPin will be free to use. We are committed to being not for profit.

Q4) Have you done any evaluation of how this works as a community tool / its social impact?

A4) Yes, there will be a full evaluation of the Reading work on the website in the next few weeks but initial information suggests there have been lasting relationships out of the HistoryPin hub work.

Stuart Macdonald, University of Edinburgh, AddressingHistory

This project came out of a community content strand of a UK Digitisation programme funded by JISC. The project was done in partnership with the National Library of Scotland and with advice from the University of Edinburgh Social History Department and Edinburgh City Council’s Capital Collections. This was initially a 6 month project.

The idea was to create an online crowdsourcing tool which would combine data from historical Scottish Post Office Directories (PODs) with contemporaneous maps. These PODs are the precursors to phone directories/Yellow Pages. They offer a fine-grained spatial and temporal view of social, economic and demographic circumstances. They provide residents' names, occupations and addresses. They have several sub-directories – we deal with the General Directory in our project. There are also some great adverts – some fabulous social history resources.

Phase 1 of this work focused on 3 volumes for Edinburgh (1784-5, 1865, 1905-6) and historic Scottish maps georeferenced by the NLS…

The tool was built with OpenLayers as the web-based mapping client; it allows you to move a map pin on the historical map to correct or add a georeference for entries. Data is held in a Postgres database, and the Google geocoder is used to find initial locations for points on the map.
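
As an aside, saving a user-corrected pin is straightforward if the Postgres database has PostGIS enabled – a hypothetical sketch below, with invented table and column names rather than AddressingHistory's actual schema.

    # Hypothetical sketch of saving a corrected georeference to PostGIS (invented schema).
    import psycopg2

    conn = psycopg2.connect("dbname=addressinghistory user=demo")  # placeholder connection
    cur = conn.cursor()

    entry_id, lon, lat = 12345, -3.1883, 55.9533  # the pin the user dragged to a new spot

    cur.execute(
        """
        UPDATE pod_entries                    -- hypothetical table of directory entries
           SET geom = ST_SetSRID(ST_MakePoint(%s, %s), 4326),
               georef_status = 'user_corrected'
         WHERE id = %s
        """,
        (lon, lat, entry_id),
    )
    conn.commit()
    cur.close()
    conn.close()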

The tool had to be usable for users of various types – though we mainly aim at local historians etc. We wanted a mechanism to check user-generated content such as georeferences, name or address edits, and annotations. And it was deemed useful to have the original scanned directory page available. We amplified both the tool and the API via social media channels – blog, Twitter, Flickr etc.

So, looking at a screenshot of the tool here, you can see the results, the historic map overlay options, the editing options, the link to view the original scanned page and three download options – text, KML, …

Phase 2 sought to develop functionality and to build sustainability by broadening geographic and temporal coverage. This phase took place from Feb-Sept 2011. We have been adding new content for Aberdeen, Glasgow and Edinburgh, all for 1881 and 1891 – those are census years and that's no coincidence. But much of phase 2 was concerned with improving the parser and improving performance; our new parser has a far improved success rate. Additional features added in phase 2: spatial searching via a bounding box; associating map pins with search results; searching across multiple addresses; and we are aiding searching by applying Standard Industrial Classifications (SIC) to professions.
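
To give a flavour of what the parser has to do, here's a toy sketch of splitting a typical General Directory line into name, occupation and address and tagging it with a rough SIC-style category – real POD entries and their OCR are far messier than this, and the category mappings here are invented examples.

    # Toy sketch of parsing a Post Office Directory line (real entries and OCR are far messier).
    import re

    # Very rough pattern: "Surname, Forename, occupation, address"
    ENTRY = re.compile(r"^(?P<surname>[^,]+),\s*(?P<forename>[^,]+),\s*(?P<occupation>[^,]+),\s*(?P<address>.+)$")

    # Invented mapping from occupation keywords to SIC-style categories
    SIC_LOOKUP = {
        "baker": "Manufacture of bread",
        "wright": "Manufacture of wood products",
        "teacher": "Education",
    }

    def parse_line(line):
        m = ENTRY.match(line.strip())
        if not m:
            return None  # send to the 'needs human attention' pile
        entry = m.groupdict()
        occ = entry["occupation"].lower()
        entry["sic"] = next((cat for kw, cat in SIC_LOOKUP.items() if kw in occ), "Unclassified")
        return entry

    print(parse_line("Smith, John, baker, 12 High Street"))
    # {'surname': 'Smith', 'forename': 'John', 'occupation': 'baker',
    #  'address': '12 High Street', 'sic': 'Manufacture of bread'}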

We have also recently launched Augmented Reality access via the Layar phone app. This allows you to compare your current location with AddressingHistory records – people, professions etc. – from the past. This is initially launched for Edinburgh but we hope to also launch for Aberdeen and Glasgow as well as other cities as appropriate. You can view the points on a live camera feed, or view a map. Right now you can't edit the locations, but we're looking at how that could be done. You can also search/refine by particular trade categories.

Lessons learned. I mentioned earlier that this sort of project gets compared to GalaxyZoo – but they have 60k galaxies, and we only have 500k people in Edinburgh. That means we've really begun thinking carefully about what content has interest for our potential "crowd", and about the importance of covering multiple geographic locations/cities. In this phase we have been separating the parsing from the interface and back-end storage – this allows changes to be implemented without affecting the live tool. We've been externalising the configuration files – editable XML-based files to accommodate repeated OCR and content inconsistencies, run with the POD parser to refine parsed content. The parsing and refining process is almost unending – a realistic balance needs to be struck over what should be done by machine in advance. And we need to continue to consult with others interested in this era who are using the PODs already.

In terms of sustainability the tool is openly available. There are some business models we've been considering: revenue generation via online donations, subscription models, freemium possibilities, academic advertising. We welcome your suggestions.

Phase 2 goes live very soon.

Success of these projects is about getting traction with the community – continued and extended use by that community. Hopefully adding new content will really help us gain that traction.

James: It’s worth saying that before the project we looked at the usage of the physical PODs – they are amongst the most used resources in the city libraries, this stuff is being used for research purposes which was one of our driving motivations.

Q&A

Q1) Presumably you have genealogists using this – what feedback have you had?

A1) I think the draw is the population coverage and having multiple years – being able to track people through time. We had really good feedback but usage has been modest so far.

Nicola) Genealogists want a particular area at a particular time, and that's when you capture their interest. It's quite tricky: that one patch is the thing they are interested in, and all that material is potentially available, but you need their engagement to make the labour-intensive process of adding new directories worthwhile – yet they want their patch covered before they engage, so there is a balance to be struck there.

And with that we are onto the next session – we are going to grab a coffee etc. and then join a wee breakout session. I’ll report back from their key issues but won’t be live blogging the full discussions.

So…

1. GAP Analysis

  • Use Google geo products if you must but beware
  • Think twice about geo referencing
  • There are other geocoding tools
  • There are text parsing tools

2. Mobile futures

  • Do I want to go native or not? There's a JISC report from EDINA on mobile apps and another set of guidance coming out soon.

Kate Jones, University of Portsmouth, Stepping Into Time

I am a lecturer in Human Geography at Portsmouth but I did my PhD at UCL working on health and GIS. But I’m going to talk today about data on bomb damage in London and how that can be explored and clustered with other data to make a really rich experience.

And I want to talk to you first about users, and the importance of making user friendly mapping experiences as that’s another part of my research.

I’m only two months into this project but it’s already been an interesting winding path. When you start a geography degree you learn “Almost everything that happens, happens somewhere and knowing where something happens is critically important” (Longley et al 2010). So this project is about turning data into something useful, creating information that can be linked to other information and can become knowledge.

For user-centred design you start by designing a user story. So we have Megan, a student of history; Mark, a geography undergraduate; and Matthew, an urban design postgraduate. For each user we can identify the tools they will be familiar with – they will know their own software etc. But they all use Google, Bing, Web 2.0 type technology. Many of them have smartphones. Many have social networking accounts. I had assumed that this generation would be really IT literate – but while they are fine with Facebook, they are really quite intimidated by desktop GIS. It's important to have appropriate expectations of what knowledge they have and what they want to do. This group also learn best with practical problems to solve, and they love visual materials. And they can find traditional lectures quite boring.

There are challenges faced by the user:

(1) determining available data – how do we make sure we only do one thing once, rather than replicating effort

(2) understanding the technology, concepts and methods required to process and integrate data

(3) implementing the technical solutions – some solutions are very intimidating if you are not a developer. I worked with an urban design student on a previous usability project – he downloaded the data from Digimap but couldn't deal with even opening the data in a GIS; eventually he did it in Photoshop, which he knew how to use, hand-colouring maps etc.

So we want to link different types of data related to London during the Blitz. It's aimed at students, researchers and citizen researchers – any non-commercial use. We want to develop web and mobile tools so that you can explore and discover where bombs fell and the damage caused – and the sorts of documents and images linked to those locations. For the first time this data will be available in spatially referenced form, allowing new interpretations of the data.

We will be creating digital maps of the bomb census – the National Archives is scanning these and we will make them spatially referenced. We will add spatial data for different boundaries – street/administrative boundaries etc. And then we will explore linkage to spatially referenced images, creating a web mapping application for a more enriched and real sense of the era.

So what data to use? Well I’m a geographer not a historian but my colleague on this project at the National Archive pulled out all of the appropriate mapping materials, photographs etc. It’s quite overwhelming. We will address this data through two types of maps:

1) Aggregate maps of Nightly Bomb Drops during Blitz

2) Weekly records – there are over 500 maps for region 5 (central London), so we are going to look at the first week of the Blitz and look at 9 maps of region 5.

So here is a map of the bomb locations – each black mark on the map is a bomb – when there are a lot it can be hard to see exactly where each bomb landed. We will be colour-coding the maps to show the day of the week the bomb fell, and will show whether it was a parachute or an oil bomb, drawn from other areas of the archive.

The project has six work packages and the one that continues across the full project is understanding and engaging users – if you want to be part of this usability work do let me know.

We have been doing wireframes of the interface using a free tool called Pencil. We will use an HTML prototype with users to see what will work best.

So our expected project outcome is that we will have created georeferenced bomb maps – a digital record of national importance. This data will be shared with the National Archives – reducing the use of the original fragile maps and aiding their preservation. We are also opening up the maps so that others don't need specialist skills to prepare and process the data – we only need to do one thing once. We'll be sharing the maps through ShareGeo. And there will then be some research coming out of these maps – opportunities to look at patterns and compare the data to social information etc.

Learning points to date that will hopefully be useful for other projects:

Before the National Archives I had a different project partner who pulled out of the project as they were not happy with the licensing arrangements etc. – I've blogged suggestions on how to avoid that in the future: http://blitzbomcensusmaps.wordpress.com/2012/02/09/.

Scanning and digitising delays – because lots of JISC projects were requesting jobs from the same archive! But I negotiated 2 scans to use as sample data for all other work; the final data can then be slotted in when scanned in June. Something to bear in mind in digitisation projects, especially where more than one project in the same stream works with the same archive/partner.

Summary: Linking historic data using the power of location. If you are interested in being part of our user group – please contact me via the blog or as @spatialK8

Natalie Pollecutt and Deborah Leem, Wellcome Library, Putting Medical Officer of Health reports on the map: MOH Reports for London 1848-1972

Natalie: This is a new project. I had heard about a tool called mapalist, but I ended up using Google Fusion Tables – it was free, easy to use, and there was lots of support information. I started off by doing a few experiments with Google Fusion Tables. So this first map is showing registered users; then I tried it out with photography requests to the library – tracking orders and invoice payments. So I showed this off around the office and someone suggested our Medical Officer of Health Reports as something that we should try mapping.

These reports are discrete – 3000 in total – but they are a great historical record. Clicking on a point brings back the place, the subjects, and a link to view the catalogue record – you can order the original from there.

Deborah: The reports are the key source on public health from the mid 19th to the mid 20th century. They were produced by the Medical Officers of Health in each local authority, who produced annual reports covering outbreaks of disease, sanitation, etc. Lots of ice cream issues at one point in the 19th century – much concern about the health effects of poor quality ice cream. They vary in length but the longest are around 350 pages.

Natalie: On our shelves these are very much inaccessible bundles of papers. I wanted to talk more about the tools I was considering. I tried out mapalist.com (addresses); maptal.es (search for a location); mapbox.com (not free); mashupforge.com; targetmap.com; Unlock (EDINA); Recollector (NYPL); the Google Maps API; the Google Fusion Tables API. In the future I will be trying the Google Maps API, Google Fusion Tables and also batch geocoding, which you can do from the tables.

Deborah: This is our catalogue records. Our steering committees want to search materials geographically so we are trying to enhance our catalogue records for each report that we are digitising – about 7000 for the London collection in scope. We needed to add various fields to allow search by geographic area and coverage date. And what we are trying to think about is the change in administrative boundaries in London. Significant changes in the 19th century and also changes in 1965 to boroughs. Current areas will be applied but we are still working on the best way to handle historic changes so we hope to learn from today on that.

Natalie: One of the things we've begun to realise, especially today, is that the catalogue record isn't the best place for geographic information. Adding fields for geographic information, and drawing this out of other fields, like the 245 title field, is helpful, but we need to find a way to do this better – how do we associate multiple place names?

This was very much an experiment for us. But we need to rethink how to geocode the data from library catalogue records – Google will give you just one marker for London even if there are ten records and that’s not what we’d want as cataloguers. We have learnt about mapping our data – and about how to think about catalogue records as something that can be mapped in some way. Upgrade of catalogue records for Medical Officers of Health Reports – very useful for us to do anyway.
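
One pragmatic workaround for the "one marker for London" problem is to geocode each distinct place once and then nudge the individual records apart so they all stay visible – a quick sketch with hard-coded coordinates and invented record IDs, just to illustrate.

    # Quick sketch: geocode each distinct place once, then jitter records so all are visible.
    from collections import defaultdict

    records = [
        {"id": "b12345", "title": "MOH report, Hackney, 1911", "place": "Hackney"},
        {"id": "b12346", "title": "MOH report, Hackney, 1912", "place": "Hackney"},
        {"id": "b12347", "title": "MOH report, Camberwell, 1911", "place": "Camberwell"},
    ]

    # One lookup per distinct place (hard-coded here; in practice call a geocoder once per place)
    place_coords = {"Hackney": (51.5450, -0.0553), "Camberwell": (51.4742, -0.0927)}

    by_place = defaultdict(list)
    for rec in records:
        by_place[rec["place"]].append(rec)

    plotted = []
    for place, recs in by_place.items():
        lat, lon = place_coords[place]
        for i, rec in enumerate(recs):
            # offset each record by ~100m steps so ten reports don't collapse to one dot
            plotted.append({**rec, "lat": lat + i * 0.001, "lon": lon})

    for p in plotted:
        print(p["id"], p["place"], round(p["lat"], 4), p["lon"])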

Top tips from us:

  • Test a lot, and in small batches, before doing a full output/mapping – makes it easier to make changes. 3000 is too many to test things really, need to trial on smaller batch.
  • Know where you’ll put your map – it was an experiment. I blogged about it but it’s not on the website, it’s a bit hidden. You need to know what to do with it
  • Really get to know your data source before you do anything else! Unless you do that it’s hard to know what to expect.
Deborah: Our future plan is to digitise and make freely available the 7000 MOH reports via the Wellcome Digital Library by early 2013. And we hope to enhance the MOH catalogue records as well.
Natalie: Initial feedback has been really positive, even though this was a quick and dirty experiment.
James: There are people here you can tap – looking at Humphrey re: nineteenth century – and we have some tools that might be useful. We can chat offline. This is what we wanted out of today – exchange and new connections.

Stuart Dunn, KCL, Digital Exposure of English Place-Names (DEEP)

I'm going to talk a bit about the DEEP project, funded under the recent JISC Mass Digitisation call. It's follow-on work from a project with our colleagues on this project at EDINA. This is a highly collaborative project between King's College London, the University of Edinburgh Language Technology Group, EDINA, and the National Place Names Group at Nottingham.

DEEP is about placenames, specifically historic placenames and how they change over time. Placenames are dynamic, and the way places are attested also changes to reflect those changes. The etymological and social meaning of placenames really changes over time. Placenames are contested – there is real disagreement over what places should be called. They are documented in different ways; there are archival records of all sorts, from Domesday onwards (and before). And they have been researched already: the English Place-Name Society has done this for us – they produced the English Place-Name Survey, 86 (paper) volumes in total, organised by county. There are no hard and fast editorial guidelines on how this was produced, so the data is very diverse.

There are around 80 years of scholarship, covering 32 English counties, 86 volumes, 6157 elements, 30,517 pages, and about 4 million individual place-name forms – but no one yet knows how many bibliographic references.

Contested interpretations and etymologies – and some obscene names, like "Grope Lane", help show how contested these are. So we are very much building a gazetteer that will connect and relate the appropriate placenames.

The work on DEEP is follow up to the CHALICE project which was led by Jo Walsh and was a project between EDINA and the Language Technology Group at Edinburgh. This extracted important places from OCR text and marked them up in xml. We are adopting a similar approach in DEEP. The University of Belfast is to digitise the Place Names Survey, then the OCR text will be parsed, and eventually this data will go into the JISC UNLOCK service.

We have been trying to start this work by refining the XML processing of the OCR. Belfast's tagging system feeds the parser that helps identify historic variants, etc. The data model changes from volume to volume, which is very challenging for processing. In most cases we have parish-level grid references, but the survey goes down to township, settlement, minor name and field name levels. And we have the challenge of varying counties and administrative terminology. So we are putting data into MADS (Metadata Authority Description Schema) records so that we don't impose a model but retain all the relevant information.
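
To make that data-model point a bit more concrete, here's a hypothetical sketch of what a single place record with variant attestations might look like before it is mapped into MADS – the field names and example values are invented, not the project's actual schema.

    # Hypothetical sketch of a historic place record with attested variants (invented fields,
    # not the actual MADS mapping used by DEEP).
    import json
    from dataclasses import dataclass, field, asdict
    from typing import Optional

    @dataclass
    class Attestation:
        form: str            # the spelling as attested in a source
        date: str            # date or date range of the attestation
        source: str          # bibliographic reference

    @dataclass
    class PlaceRecord:
        preferred_name: str
        county: str
        parish: Optional[str] = None
        grid_reference: Optional[str] = None   # often only available at parish level
        attestations: list = field(default_factory=list)

    rec = PlaceRecord(
        preferred_name="Grope Lane",
        county="Shropshire",
        parish="Shrewsbury St Alkmund",
        grid_reference="SJ4912",
        attestations=[
            Attestation("Gropecuntelane", "c.1305", "EPNS Shropshire volume (illustrative)"),
            Attestation("Grope Lane", "1881", "OS Town Plan (illustrative)"),
        ],
    )
    print(json.dumps(asdict(rec), indent=2))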

Our main output for JISC will be point data for Unlock. Conceptually it will be a little bit like GeoNames – we are creating Linked Data, so it would be great to have a definitive URI for one place no matter what the variants in name.

Not only is Google problematic, but so are the geographic primitives of points, lines and polygons. Pre-OS there is very little data on the geographic associations of place-names; points are arbitrary and dependent on scale; administrative geographies change over time; even natural features can mislead – rivers move over time for instance.

We are talking to people like Vision of Britain both to see if we can feed into that site and if we can use that data to check ours. One of the projects I am very interested in is the Pleiades project, which has digitised the authoritative map for ancient Roman and Greek history. This is available openly as Linked Data. That's what I'd like to see happening with our project, which would include variant names, connections, bibliographic references, and a section of the data model from the MADS classification.

Another important aspect here is crowdsourcing. The Nottingham partners in particular will be working with the enthusiastic place-names community to look at correcting errors and omissions in the digitisation and the NLP; to validate our output with local knowledge; to add geographic data where it is lacking – such as field names; to identify crossovers with other data sources, etc. We will be discussing this at our steering group meeting tomorrow.

And finally a plug for a new AHRC project! This is a scoping study under the Connected Communities programme on crowdsourcing work in this area.

Q&A

Comment: I would be interested to see how you get on with your crowdsourcing – we work on Shetland Place Names with the community and it would be really interesting to know how you cope with the data and what you use.

James: Are you aware of SWOP in Glasgow? The tools they use might be applicable or useful.

Q1) I would be interested in seeing how we can crowdsource place names from historic maps as well – linking to Georeferencer project maps or Old Maps Online – that could be used to encourage the community to look at the records, with some sort of crowdsourcing tool around that.

A1) As you know, the British Library’s GeoReferencer saw all 700+ maps georeferenced in four days, so there is clearly lots of interest there.

Humphrey: We are proposing a longer-term project to the EU in this area. We haven’t been funded to build an API; we’ve done much of what has been discussed today, but those outputs are not accessible because of what, and how, we’ve been funded in the past.

And with that we are done for the day! Thank you to all of our wonderful speakers and very engaged attendees.


Programme Now Available for Geospatial in the Cultural Heritage Domain – Past, Present & Future

We are delighted to announce our final programme for the Geospatial in the Cultural Heritage Domain, Past, Present & Future (#geocult) event which takes place next week, Wednesday 7th March 2012, at the Maughan Library, King’s College London.

A fantastic programme of speakers will explore the use of geospatial data and tools in cultural heritage projects with breakout discussions and unconference sessions providing opportunity for networking and further discussion of this exciting area.

We are delighted to announce that our speakers for the day will include:

Humphrey Southall of the University of Portsmouth will talk about Old Maps Online, which launched today at the Locating the Past (#geopast) event in London.

Stuart Dunn from King’s College London, talking about the new Digital Exposures of English Place Names (DEEP) project, which is building a gazetteer that tracks the changing nature of place names.

Chris Fleet of the National Library of Scotland, and co-author of Scotland: Mapping a Nation, will talk about recent developments at the NLS.

Claire Grover of the University of Edinburgh will talk about the new Digging Into Data project Trading Consequences, which will use data mining techniques to investigate the economic and environmental impact of 19th century trading.

Natalie Pollecutt from the Wellcome Library will be talking about their project: Medical Officers of Health (MOH) Reports for London 1848-1972 which is building a free online data set on public health in London.

Michael Charno, Digital Archivist and web developer at the Archaeology Data Service, will talk about Grey Literature and spatial technologies.

Stuart Nicol of the University of Edinburgh will talk about Visualising Urban Geographies, a recent project to create geospatial tools for historians.

Jamie McLaughlin from the University of Sheffield will talk about Locating London’s Past, a website which allows you to search digital resources on early modern and eighteenth-century London, and to map the results.

Stuart Macdonald of the University of Edinburgh will talk about AddressingHistory, a website and crowdsourcing project to geospatially reference historical post office directory data.

Sam Griffiths of University College London will talk about “Classifying historical business directory data: issues of translation between geographical and ethnographic contributions to a community PPGIS (Public Participation GIS) project”.

Kate Jones of the University of Portsmouth will talk about Stepping Into Time, a project to bring World War Two bomb damage maps into the real world by using web and mobile mapping technology.

We will also be welcoming Rebekkah Abraham and Michael Daley from We Are What We Do to talk about HistoryPin, a website and mobile app which enables you to browse and add historical images to a map of the world, exploring the past through georeferenced photographs.

The detailed programme for the day can be found on our Eventbrite page where you can also book your free place at this event. Bookings close on Friday 2nd March 2012 so book soon!

We will also be live blogging, tweeting and recording this event so do also keep an eye on the blog here, the #geocult hashtag, and on our Geospatial in the Cultural Heritage Domain – Past, Present & Future page where you will be able to access materials after the event.


Upcoming Event: “Geospatial” in the Cultural Heritage domain, past, present and future

We are very excited to announce that bookings are now open for the next JISC GECO workshop!

“Geospatial” in the Cultural Heritage domain, past, present and future (#geocult), taking place on Wednesday 7th March 2012 in London, will be an opportunity to explore how digitised cultural heritage content can be exploited through geographical approaches and the types of tools and techniques that can be used with geo-referenced and geotagged content.

Issues we are keen to discuss include the selection of maps and materials, accuracy and precision, staff and technical requirements, sustainability, and licensing.

The event will take place at the Maughan Library, Chancery Lane, part of King’s College London. We are most grateful to the lovely people at the KCL Centre for e-Research for securing us this super location.

Library Entrance by Flickr User maccath / Katy Ereira

We are currently confirming the last few speakers and titles for talks so will post something here on the blog once the programme is finalised.

We already have a great draft schedule and some fantastic speakers confirmed so this promises to be a fascinating and stimulating day of talks and breakout sessions.

As we are sharing details of this event at pretty short notice, we would be particularly grateful if you could book your place as soon as possible – and please do tell your colleagues and friends who may be interested!

Book your free place now via our Eventbrite page:  http://geocult.eventbrite.com/

If you would like to propose any additional talks or ask any questions about the event please email the JISC GECO team via:  edina@ed.ac.uk.

