BL Labs Roadshow 2016

1330  Introduction
Dr Beatrice Alex, Research Fellow at the School of Informatics, University of Edinburgh

1335 Doing digital research at the British Library Nora McGregor, Digital Curator at the British Library

The Digital Research Team is a cross-disciplinary mix of curators, researchers, librarians and programmers supporting the creation and innovative use of the British Library’s digital collections. In this talk Nora will highlight how we work with those operating at the intersection of academic research, cultural heritage and technology to support new ways of exploring and accessing our collections through: getting content in digital form and online; collaborative projects; and offering digital research support and guidance.

1405  British Library Labs
Mahendra Mahey, Project Manager of British Library Labs.

The British Library Labs project supports and inspires scholars to use the British Library’s incredible digital collections in exciting and innovative ways for their research, through various activities such as competitions, awards, events and projects.

Labs will highlight some of the work that they and others are doing around digital content in libraries and also talk about ways to encourage researchers to engage with the British Library. They will present information on the annual BL Labs Competition, which closes this year on 11th April 2016. Through the Competition, Labs encourages researchers to submit their important research question or creative idea which uses the British Library’s digital content and data. Two Competition winners then work in residence at the British Library for five months and then showcase the results of their work at the annual Labs Symposium in November 2016.

Labs will also discuss the annual BL Labs Awards which recognises outstanding work already completed, that has used the British Library’s digital collections and data. This year, the Awards will commend work in four key areas: Research, Artistic, Commercial and Teaching / Learning. The deadline for entering the BL Labs Awards this year is 5th September 2016.

1420  Overview of projects that have used the British Library’s digital content and data.
Ben O’Steen, Technical Lead of British Library Labs.

Labs will further present information on various projects such as the ‘Mechanical Curator’ and other interesting experiments using the British Library’s digital content and data.

1500 Coffee and networking

1530 BL Labs Awards: Research runner-up project: “Palimpsest: Telling Edinburgh’s Stories with Maps”
Professor James Loxley, Palimpsest, University of Edinburgh

Palimpsest seeks to find new ways to present and explore Edinburgh’s literary cityscape, through interfaces showcasing extracts from a wide range of celebrated and lesser known narrative texts set in the city. In this talk, James will set out some of the project’s challenges, and some of the possibilities for the use of cultural data that it has helped to unearth.

1600 Geoparsing Historical Texts
Dr Claire Grover, Senior Research Fellow, School of Informatics, University of Edinburgh

Claire will talk about work the Edinburgh Language Technology Group has been doing for Jisc on geoparsing historical texts such as the British Library’s Nineteenth Century Books and the Early English Books Online Text Creation Partnership, which is creating standardized, accurate XML/SGML-encoded electronic text editions of early printed books.

1630 Finish

Feedback for the event
Please complete the following feedback form.


eLearning@ed/LTW Monthly Showcase #2: Open

Today we have our second eLearning@ed/LTW Showcase and Network event. I’m liveblogging so, as usual, corrections and updates are welcome. 
Jo Spiller is welcoming us along and introducing our first speaker…
Dr. Chris Harlow – “Using WordPress and Wikipedia in Undergraduate Medical & Honours Teaching: Creating outward facing OERs”
I’m just going to briefly tell you about some novel ways of teaching medical students and undergraduate biomedical students using WordPress and platforms like Wikipedia. So I will be talking about our use of WordPress websites in the MBChB curriculum. Then I’ll tell you about how we’ve used the same model in Reproductive Biology Honours. And then how we are using Wikipedia in Reproductive Biology courses.
We use WordPress websites in the MBChB curriculum during Year 2 student selected components. Students work in groups of 6 to 9 with a facilitator. They work with a provided WordPress template – the idea being that the focus is on the content rather than the look and feel. In the first semester the topics are chosen by the group’s facilitator. In semester two the topics and facilitators are selected by the students.
So, looking at example websites you can see that the students have created rich websites, with content, appendices. It’s all produced online, marked online and assessed online. And once that has happened the sites are made available on the web as open educational resources that anyone can explore and use here: http://studentblogs.med.ed.ac.uk/
The students don’t have any problem at all building these websites and they create these wonderful resources that others can use.
In terms of assessing these resources there is a 50% group mark on the website by an independent marker, a 25% group mark on the website from a facilitator, and (at the students’ request) a 25% individual mark on student performance and contribution, which is also given by the facilitator.
In terms of how we have used this model with Reproductive Biology Honours, it is a similar idea. We have 4-6 students per group. This work counts for 30% of their Semester 1 course “Reproductive Systems” marks, and assessment is along the same lines as the MBChB. Again, we can view examples here (e.g. “The Quest for Artificial Gametes”). Worth noting that there is a maximum word count of 6000 words (excluding appendices).
So, now onto the Wikipedia idea. This was something which Mark Wetton encouraged me to do. Students are often told not to use or rely on Wikipedia but, speaking as a biomedical scientist, I use it all the time. You have to use it judiciously but it can be an invaluable tool for engaging with unfamiliar terminology or concepts.
The context for the Wikipedia work is that we have 29 Reproductive Biology Honours students (50% Biomedical Sciences, 50% intercalating medics), split into groups of 4-5 students. We did this in Semester 1, week 1, as part of the core “Research Skills in Reproductive Biology”. And we benefited from expert staff including two Wikipedians in Residence (at different Scottish organisations), a librarian, and a learning, teaching and web colleague.
So the students had an introduction to Wikipedia, then some literature searching examples. We went on to groupwork sessions to find papers on particular topics, looking for differences in definitions, spellings, terminology. We discussed findings. This led onto groupwork where each group defined their own aspect to research. And from there they looked to create Wikipedia edits/pages.
The groups really valued trying out different library resources and search engines, and seeing the varying content that was returned by them.
The students then, in the following week, developed their Wikipedia editing skills so that they could combine their work into a new page for Neuroangiogenesis. Getting that online in an afternoon was incredibly exciting. And actually that page was high in the search rankings immediately. Looking at the traffic statistics, that page seemed to be getting 3 hits per day – a lot more reads than the papers I’ve published!
So, we will run the exercise again with our new students. I’ve already identified some terms which are not already out there on Wikipedia. This time we’ll be looking to add to or improve High-Grade Serous Carcinoma, and Fetal Programming. But we have further terms that need more work.
Q&A
Q1) Did anyone edit the page after the students were finished?
A1) A number of small corrections and one querying of whether a PhD thesis was a suitable reference – whether a primary or secondary reference. What needs to be done more than anything else is building more links into that page from other pages.
Q2) With the WordPress blogs you presumably want some QA as these are becoming OERs. What would happen if a project got, say, a low C?
A2) Happily that hasn’t happened yet. That would be down to the tutor I think… But I think people would be quite forgiving of undergraduate work, as which it is clearly presented.
Q3) Did you consider peer marking?
A3) An interesting question. Students are concerned that there are peers in their groups who do not contribute equally, or let peers carry them.
Comment) There is a tool called PeerAim where peer input weights the marks of students.
Q3) Do all of those blog projects have the same model? I’m sure I saw something on peer marking?
A3) There is peer feedback but not peer marking at present.
Dr. Anouk Lang – “Structuring Data in the Humanities Classroom: Mapping literary texts using open geodata”
I am a digital humanities scholar in the school of Languages and Linguistics. One of the courses I teach is digital humanities for literature, which is a lovely class and I’m going to talk about projects in that course.
The first MSc project the students looked at was to explore Robert Louis Stevenson’s The Dynamiter. Although we were mapping the text, the key aim was to understand who wrote which part of the text.
So the reason we use mapping in this course is that these are brilliant analytical students but they are not used to working with structured data, and this is an opportunity to do that. So, using CartoDB – a brilliant tool that will draw data from Google Sheets – they needed to identify locations in the text, but I also asked students to give texts an “emotion rating”. That is a rating of intensity of emotion based on the work of Ian Gregory, a spatial historian who has worked with Lake District data on the emotional intensity of texts.
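As a rough illustration of the kind of structured data involved, here is a minimal sketch (all place names, coordinates and ratings below are invented, not the students’ actual data) of rows combining a location with an “emotion rating”, written out as the sort of CSV that CartoDB can ingest:

```python
import csv
import io

# Hypothetical rows of the kind compiled by hand in Google Sheets:
# a place name from the text, coordinates, and a 1-5 "emotion rating".
rows = [
    {"place": "Bloomsbury", "lat": 51.5220, "lon": -0.1257, "emotion": 2},
    {"place": "Whitehall",  "lat": 51.5034, "lon": -0.1276, "emotion": 4},
]

# CartoDB ingests plain tabular data, so a CSV with latitude and
# longitude columns is enough for it to draw points on a map.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["place", "lat", "lon", "emotion"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The emotion column can then drive point colour or heatmap intensity in the visualisation.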
So, the students built this database by hand. And then, loaded into CartoDB, you get all sorts of nice ways to visualise the data. So, looking at a map of London you can see where the story occurs. The Dynamiter is a very weird text with a central story in London but side stories about the planting of bombs, which is kind of played as comedy. The view I’m showing here is a heatmap. So for this text you can see the scope of the text. Robert Louis Stevenson was British, but his wife was American, and you see that this book brings in American references, including unexpected places like Utah.
So, within CartoDB you can try different ways to display your data. You can view a “Torque Map” that shows chronology of mentions – for this text, which is a short story, that isn’t the most helpful perhaps.
Now we do get issues of anachronisms. OpenStreetMap – on which CartoDB is based – is a contemporary map and the geography and locations on the map changes over time. And so another open data source was hugely useful in this project. Over at the National Library of Scotland there is a wonderful maps librarian called Chris Fleet who has made huge numbers of historical maps available not only as scanned images but as map tiles through a Historical Open Maps API, so you can zoom into detailed historical maps. That means that mapping a text from, say, the late 19th Century, it’s incredibly useful to view a contemporaneous map with the text.
You can view the Robert Louis Stevenson map here: http://edin.ac/20ooW0s.
So, moving to this year’s project… We have been looking at Jean Rhys. Rhys was a white Creole born in Dominica who lived mainly in Europe. She is a strongly located author, with place important to her work. For this project, rather than hand coding texts, I used the wonderful Edinburgh Geoparser (https://www.ltg.ed.ac.uk/software/geoparser/) – a tool I recommend, and a new version is imminent from Claire Grover and colleagues in LTG, Informatics.
So, the Geoparser goes through the text and picks out text that looks like places, then tells you which it thinks is the most likely location for that place – based on aspects like nearby words in the text etc. That produces XML, and Claire has created me an XSLT stylesheet, so all the students have had to do is to manually clean up that data. The Geoparser gives you a GeoNames reference that enables you to check latitude and longitude. Now this sort of data cleaning, the concept of gazetteers, these are bread and butter tools of the digital humanities. These are tools which are very unfamiliar to many of us working in the humanities. This is open, shared, and the opposite of the scholar working secretly alone in the library.
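To give a flavour of what that cleanup stage works with, here is a sketch. The element and attribute names below are simplified stand-ins, not the Edinburgh Geoparser’s actual output schema (which is richer, with gazetteer references and scores), but the shape of the task is the same: pulling recognised place names and their coordinates out of geoparser-style XML.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for geoparser output (invented element names).
sample = """
<doc>
  <placename name="Halifax" lat="53.7184" long="-1.8571"/>
  <placename name="Paris" lat="48.8534" long="2.3488"/>
</doc>
"""

def extract_places(xml_text):
    """Pull (name, lat, lon) tuples out of geoparser-style XML."""
    root = ET.fromstring(xml_text)
    return [(el.get("name"), float(el.get("lat")), float(el.get("long")))
            for el in root.iter("placename")]

print(extract_places(sample))
```

Once the places are in this tabular form, each latitude/longitude pair can be checked against GeoNames and corrected by hand where the geoparser guessed wrong.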
We do websites in class to benefit from that publicness – and the meaning of public scholarship. When students are doing work in public they really rise to the challenge. They know it will connect to their real world identities. I insist students show their name, their information, their image because this is part of their digital scholarly identities. I want people who Google them to find this lovely site with its scholarship.
So, for our Jean Rhys work I will show you a mock up preview of our data. One of the great things about visualising your data in these ways is that you can spot errors in your data. So, for instance, checking a point in Canada we see that the Geoparser has picked Halifax, Nova Scotia when the text indicates Halifax in England. When I raised this issue in class today the student got a wee bit embarrassed and made immediate changes… which again is kind of a perk of working in public.
Next week my students will be trying out QGIS  with Tom Armitage of EDINA, that’s a full on GIS system so that will be really exciting.
For me there are real pedagogical benefits of these tools. Students have to really think hard about structuring their data, which is really important. As humanists we have to put the data in our work into computational form. Taking this kind of class means they are more questioning of data, of what it means, of what accuracy is. They are critically engaged with data and they are prepared to collaborate in a gentle kind of way. They also get to think about place in a literary sense, in a way they haven’t before.
We like to think that we have it all figured out in terms of understanding place in literature. But when you put a text into a spreadsheet you really have to understand what is being said about place in a whole different way than a close reading. So, if you take a sentence like: “He found them a hotel in Rue Lamartine, near Gare du Nord, in Montmartre”. Is that one location or three? The Edinburgh Geoparser maps two points but not Rue Lamartine… So you have to use Google Maps for that… And is the accuracy correct? And you have to discuss if those two map points are distorting. The discussion there is richer than any other discussion you would have around close reading. We are so confident about close readings… We assume it as a research method… This is a different way to close read… To shoehorn the text into a different structure.
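The “one location or three?” question can be made concrete by measuring how far apart the plotted points actually sit. A quick haversine sketch, using my own approximate coordinates for the two places (rough look-ups, not project data):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Approximate coordinates, central Paris:
rue_lamartine = (48.8776, 2.3432)
gare_du_nord = (48.8809, 2.3553)

d = haversine_km(*rue_lamartine, *gare_du_nord)
print(f"{d:.2f} km apart")
```

The two points are under a kilometre apart, which is exactly the scale at which you have to decide whether separate markers clarify or distort the text’s sense of place.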
So, I really like Michel de Certeau’s “Spatial Stories” in The Practice of Everyday Life (de Certeau 1984), where he talks about structured space and the ambiguous realities of use and engagement in that space. And that’s what that Rue Lamartine type example is all about.
Q&A
Q1) What about looking at distance between points, how length of discussion varies in comparison to real distance
A1) That’s an interesting thing. And that CartoDB Torque display is crude but exciting to me – a great way to explore that sort of question.
OER as Assessment – Stuart Nichol, LTW
I’m going to be talking about OER as Assessment from a student’s perspective. I study part time on the MSc in Digital Education and a few years ago I took a module called Digital Futures for Learning, a course co-created by participants and where assessment is built around developing an Open Educational Resource. The purpose is to “facilitate learning for the whole group”. This requires a pedagogical approach (to running the module) which is quite structured to enable that flexibility.
So, for this course, the assessment structure is 30% position paper (basis of content for the OER), then 40% of the mark for the OER (30% peer-assessed and tutor moderated / 10% self-assessed), and then the final 30% of the marks come from an analysis paper that reflects on the peer assessment. You could then resubmit the OER along with that paper reflecting on that process.
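As a quick sanity check of that weighting scheme (the percentages are from the talk; the example marks are invented):

```python
# Assessment weights as described: 30% position paper, 40% OER
# (split 30% peer-assessed / 10% self-assessed), 30% analysis paper.
weights = {
    "position_paper": 0.30,
    "oer_peer": 0.30,
    "oer_self": 0.10,
    "analysis_paper": 0.30,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights cover the full mark

def final_mark(marks):
    """Weighted average of component marks, each out of 100 (marks invented)."""
    return sum(weights[k] * marks[k] for k in weights)

example = {"position_paper": 65, "oer_peer": 72, "oer_self": 70, "analysis_paper": 68}
print(final_mark(example))
```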
I took this module a few years ago, before the University’s adoption of an open educational resource policy, but I was really interested in this. So I ended up building a course on Open Accreditation, and Open Badges, using Weebly: http://openaccreditation.weebly.com/.
This was really useful as a route to learn about Open Educational Resources generally but that artefact has also become part of my professional portfolio now. It’s a really different type of assignment and experience. And, looking at my stats from this site I can see it is still in use, still getting hits. And Hamish (Macleod) points to that course in his Game Based Learning module now. My contact information is on that site and I get tweets and feedback about the resource which is great. It is such a different experience to the traditional essay type idea. And, as a learning technologist, this was quite an authentic experience. The course structure and process felt like professional practice.
This type of process, and use of open assessment, is in use elsewhere. In Geosciences there are undergraduate students working with local schools and preparing open educational resources around that. There are other courses too. We support that with advice on copyright and licensing. There are also real opportunities for this in the SLICCs (Student Led Individually Created Courses). If you are considering going down this route then there is support at the University from the IS OER Service – we have a workshop at KB on 3rd March. We also have the new Open.Ed website, about Open Educational Resources which has information on workshops, guidance, and showcases of University work as well as blogs from practitioners. And we now have an approved OER policy for learning and teaching.
The new OER policy relates to assessment too, and we are clear that OERs are created by both staff and students.
And finally, fresh from the ILW Editathon this week, Ewan MacAndrew, our new Wikimedian in residence, will introduce us to Histropedia (Interactive timelines for Wikipedia: http://histropedia.com) and run through a practical introduction to Wikipedia editing.


Digital humanities: What does it mean? LiveBlog

Today I am at the Digital Humanities: What does it mean? session at Teviot debating Hall. I will be running two workshops later but will LiveBlog others talks taking place today.

We are starting with an introduction from Jessica from Forum, who is explaining the background to today’s event, in exploring what digital humanities are and what it means to be a digital only journal.

The first speaker today is Lisa Otty

Lisa Otty – Digital Humanities or How I Learned to Stop Worrying and Love the Computer

I’m going to take “digital humanities, what does it mean?” in two ways. Firstly thinking about literal definitions, but also thinking more rhetorically about what this means.

Digital humanities generates many strong opinions and anxieties – hence my title, borrowed from Dr. Strangelove. So I want to move beyond the polemic to what digital humanities actually means to practitioners.

I want to ask you about the technologies you use… From word processing to Google Books, to blogs, Twitter, to Python and Raspberry Pis (by show of hands most use the former, two code, one uses a Raspberry Pi to build). There is a full spectrum here.

Wikipedia is probably the most widely used encyclopedia but I suspect most academics would still be sceptical about it… Can we trust crowdsourced information? Well, its definition of digital humanities is really useful. What we should particularly take from this definition is that it is a methodology, computational methods. Like critical theory it cross-cuts different disciplines, which is why it is hard to slot into university structures.

Chris Forster, on the HASTAC blog (9/8/2010), talks about digital humanities as being about the direct practical use of computational methods for research; media studies and new media; using technology in the classroom; and the way new technology is rescaling research and the profession – academic publishing, social media, and alt-ac (those in academic-like roles but outside traditional structures, e.g. based in support services).

So I’ve recrafted this a bit. Digital humanities is about:

Research that uses computational methods and tools. Probably the most famous proponent of this is Franco Moretti, who uses quantitative computational methods in his area of literature. This is work at large scale – often called scalable reading or distant reading. So for instance, looking at British novelistic genres 1740-1900, he has created a visual representation of how these genres appear and disappear – frequently in clusters. Moretti says that this maps out the expectations of genres over time.

Similarly Moretti has visualised the characters in Hamlet and their deaths, mapping out that if characters are closely related to the king and closely related to Polonius then you are toast. Now you could find that out by reading Hamlet, but with that approach you can go and explore other texts.

Research that studies digital objects/culture. Lev Manovich has founded the concept of cultural analytics. For instance a recent project looks at people’s self portraits online, how they present themselves, how they describe themselves. They found women take more selfies than men, women take them in their early twenties, men in their thirties, and people in São Paulo like to recline in their selfies – not sure what that part tells us!

Research that builds digital objects/tools. For instance the Carnegie Mellon Docuscope, which looks for linguistic markers and rhetorical patterns. Interestingly, colleagues at Strathclyde using this tool found that structurally Othello is a comedy.

So you may be building tools for your discipline or area of research. We also see tools built around digitised texts, such as the Codex Sinaiticus. This has been digitised using a process which photographs the texts at many different light levels and conditions, including ultraviolet light. This allows scholars to work with texts in new ways, to read previously inaccessible or delicate texts. And there are 3D imaging techniques too. So digital images have really important implications for humanities scholars, particularly in areas such as archaeology.

This computation research fits into four key fields:
– digitisation and TEI, the latter a metadata markup language which is really scholarly best practice to use. Whole projects are based around setting up details in TEI.
– mapping and data visualisation – like Moretti, georeferencing etc.
– text mining/topic modelling
– physical computing – a catch all for digital imaging and similar technologies
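On the TEI point: TEI is an XML vocabulary, so the markup idea can be sketched very roughly. The fragment below is invented for illustration; real TEI documents need a full teiHeader and the TEI namespace, but even a toy fragment shows how encoded features (here, place names) become machine-readable:

```python
import xml.etree.ElementTree as ET

# An invented TEI-flavoured fragment (not a valid full TEI document).
fragment = """
<p>He took the train from <placeName>Paris</placeName>
to <placeName>Calais</placeName>.</p>
"""

root = ET.fromstring(fragment)
# Once features are marked up, extracting them is trivial:
places = [el.text for el in root.iter("placeName")]
print(places)
```

This is why TEI-encoded editions feed so naturally into the mapping and text-mining work described above.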

I wanted to now focus on some projects with a close association with this university.

– digitisation and TEI – the Modernist Versions project
– mapping and data visualisation – Pleiades, georeferenced places extracted from ancient classical texts
– text mining – Palimpsest uses text mining to georeferences references to places in texts to allow exploration in situ using mobile phones.
– physical computing – the Digital Imaging Unit at Edinburgh University Library is brilliant, has a fantastic blog, a rich resource.

So to the rhetorical aspects of DH.

Roberto Busa undertook a visionary project with IBM, the Index Thomisticus (1949-2005). He was really the first person to connect text to the internet. The world of 2005, when that project went live, was very different to 1949.

The term “digital humanities” was coined in 2001. Computing was already about teaching, publishing, convergent practices… The definition of DH which relates the field to a three-ring circus really connects to Chris Forster’s definition.

By 2009 we reached a pivotal moment for digital humanities: it moved from emergent to established (Christine ?, UCLA). Some enthusiasts saw this as the future. But it generated a kind of equal and opposite reaction… Not everyone wants borders reshaped and challenged; they were already invested in their own methods. New methods can be daunting. What seemed most worrying was what digital humanities might bring with it. Anxieties arose from very real concerns…

There has been an encroachment of science, and a precariousness of the humanities, with medical humanities, cognitive humanities, neuro humanities, digital humanities. Here the rhetoric sees scientific methods as more valid than humanities ones. People like Franco Moretti don’t help here. And to what extent do we use these scientific approaches to validate humanities work? I don’t think the humanities would be any less precarious if we all used such approaches.

And there are managerial and financial issues. Daniel Allington, himself a digital humanities scholar, describes humanities research as cheap, which is disadvantageous from two perspectives, both funders’ and universities’. Sometimes these projects can be about impact or trendiness, not always about the research itself. Matthew Kirschenbaum describes it more tactfully, with DH as a “tactical coinage”, acknowledging the reality of circumstances in which DH allows us to get things done, to put it simply.

And who is in DH? Generally it is a very gendered and a very white group. Typically teenage boys are the people who teach themselves to code. The terms can be inaccessible. It can be ageist. It can seem to enforce privilege. There are groups that are seeking to change this, but we have to be aware of the implications.

And those tools I showed before… Those are mainly from commercial companies, and as we all know, if you do not pay for a service, you are the product; even the British Newspaper Archive is about digitising in order to charge via genealogy websites. DH has a really different relationship to business, to digital infrastructure. I want to tell you about this to explain the polemical responses to DH. And so that you understand the social, cultural and professional implications.

Geoffrey Harpham, in the NEH bulletin (winter 2014), talks about research as being about knowledge but also the processes by which it is brought into being. We are all using digital tools. We just have to be conscious of what we are doing, what we are privileging, what we are excluding. Digital humanities scholars have put this well in a recent MIT publication. They point to questions raised:
– what happens when anyone can speak and publish? What happens when knowledge credentialing is no longer controlled solely by institutions of higher learning?
– who can create knowledge?

I liken this time to the building of great libraries in the nineteenth century. We have to be involved and we really have to think about what it means to become digital. We need to shape this space in critical ways, shaping the tools we need.

Matthew Kirschenbaum talks about digital humanities as a mobile and tactical signifier. He talks about the field as a network topology. DH, the keyword, the tag, constantly changes, is constantly redefined.

And in a way this is why Wikipedia is the perfect place to seek a definition, it is flexible and dynamic.

Digital Humanities has to also be flexible, it is up to all of us to make it what we want it to be.

Q&A

Q1) Is this an attempt for the humanities to redefine themselves to survive?
A1) It’s an important area. The digital humanist does work collaboratively with the sciences. The wrong approach is to be staking out your space and defending it; collaborative work is tactical. So many post-PhD roles are temporary contracts around projects. We can’t just maintain the status quo, but we do have to think strategically about what we do, and be critical in thinking about what that means.

Q2) Coming back to your Wikipedia comment, and the reinforcement of traditional privilege… I’ve become increasingly aware that Wikipedia can also be replicating traditional structures. A Wikipedian in Residence legitimises Wikipedia, but does it not also potentially threaten the radical nature of the space?
A2) You’ve put your finger on the problem. I think we are all aware of the gender bias in Wikipedia. And those radical possibilities, and threats, are important to stay on top of, and that includes understanding what takes place behind the scenes, in order to understand what that means.

Q3) I wanted to ask about the separate nature of some of those big digital humanities centres.
A3) In the USA there are some huge specialist centres at UCLA, the University of Victoria, Stanford, which create hugely specialist tools that are freely available but which attract projects and expertise to their organisation. In a way the lack of big centres here does make us think more consciously about what digital humanities is. I was speaking to Andrew Prescott about this recently and he thinks the big DH centres in the UK will disappear and that it will be dispersed across humanities departments. But it’s all highly political and we have to be aware of the politics of these tools and organisations when we use and engage with them.

Q4) Given we all have to put food on the table, how can we work with what is out there already – the Googles of the world, who do hire humanities experts for instance?
A4) I didn’t mean to suggest Google is bad; they are good in many ways. But DH as a tactical term is something that you can use for your benefit. It is a way to get into a job! That’s perfectly legitimate. There are very positive aspects to the term in terms of deployment and opportunities.

Q5) How do you get started with DH?
A5) A lot of people teach themselves… There are lots of resources and how-to guides online. There is Stanford’s “Tooling Up for Digital Humanities”, and the Roy Rosenzweig Center has DH tools. Or for your data you can use things like Voyant Tools. Lots of eresources online. Experiment. And follow DH people on Twitter. Start reading blogs, read tutorials of how to do things. Watch and learn!

Q6) Are there any things coming up you can recommend?
A6) Yes, we have an event coming up on 9th June. Information coming soon. You can sign up for that to see presentations, speak to scholars about DH, and there will be a bidding process for a small amount of money to try these tools. And there is also a DH network being established by institutions across Scotland so look out for news on that soon!

And with that I ran two workshops…

Panel Session

We have Jo Shaw chairing, with Ally Crockford, Anna Groundwater, James Loxley, Louise Settle and Greg Walker.

Greg
My project is not very digital, and largely inhumane! I think I’m here to show you what not to do! My project is theatrical: the only 16th century play from Scotland to survive. It had not been performed since 1554. We kind of showed why that was! It is five and a half hours long… We got a director, actors, etc., and funding to do this, which is so hard to do financially. So we set up a website, Staging and Representing the Scottish Renaissance Court, with HD video that can be edited and manipulated. Endless blogging, tweeting, and loads more resources for teachers etc. And we have local dramatic groups who are taking the play up. The Linlithgow Town Players are performing it all next year, for instance.

Ally

This is incomplete, but my project grew out of an AHRC project with Surgeons’ Hall in Edinburgh. The city is the first UNESCO City of Literature but medically it is also historically one of the most important cities in the world. So it makes some sense to look at those two factors together. So my site, a mock up, is Dissecting Edinburgh: a digital project, based on Omeka, designed for non IT specialists, but it’s still pretty tough to use actually. They have plugins and extensions. It is a bit like WordPress but more designed for academic curation. For instance there is an extension that has been used to map literary connections between real locations and H.P. Lovecraft’s work. And you can link sources to comment back to full text. And you can design “exhibitions” based on keywords or themes. Looking for similarities in sources, etc. That is the hope of what it will look like… Hopefully!

Louise
My IASH project uses historical GIS to map crime from 1900 to 1939, looking at women’s experiences and at policing. Geography became important, which is how I came to use GIS. I used Edinburgh Map Builder… although if you aren’t looking just at Edinburgh you can use Digimap, which has full UK coverage. I wasn’t technically minded, but I came to use these tools because of my research. So I got my data from court records and archives… and put that into GIS, plot the records on the map, see what changes and patterns occur. Changes appear… and suggest new questions… like plotting entertainment venues etc., and I’ve used that in papers, online etc. I’m also working with MESH: Mapping Edinburgh’s Social History, a huge project looking at living, dying, making, feeding, drinking… a huge-scale project on Edinburgh.
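As a hedged sketch of the kind of pattern-spotting Louise describes — court records in, counts per location out, before anything touches a map — with entirely invented rows (these are not real records):

```python
from collections import Counter

# Invented court-record rows: (year, offence, location) — illustration only.
records = [
    (1909, "solicitation", "Leith Walk"),
    (1912, "solicitation", "Grassmarket"),
    (1912, "theft", "Leith Walk"),
    (1925, "solicitation", "Leith Walk"),
]

# Count offences per location; in a GIS these counts would drive
# point sizes or a heat layer on a georeferenced historical map.
by_place = Counter(loc for _, _, loc in records)
print(by_place.most_common(1))  # the hotspot to look at first
```

The same grouping by year instead of place is what surfaces change over time.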

James
This is a blog site plus, I suppose. This was a project Anna and I were working on from 2011-2013, based on a very long walk that Ben Jonson took. I was lucky enough to turn up a manuscript by his travelling companion. I was exploring a text, annotating it, summarising it, and creating a book… but Anna had other ideas and we found new digital tools to draw out elements of the account… Despite being about a writer and a poet, it’s much more a documentary account of the journey itself. So within the blog we were able to create a map of the walk itself… with each point a place that Jonson and his companion visited. This was all manually created with Google Maps. It was fun but time consuming. Then we created a database used for this map. And then there were markers for horses or coaches. We worked with Dave in our college web team, who was great at bringing this stuff together. For each place you could find the place, the dates visited, distance from the last point, details of food and drink etc. We sort of tabulated the walk… and that plays to the strengths of the text. And we could calculate Jonson’s preferred walking speed… which seemed to be three miles per hour – seems unlikely, as he was in his forties and 20 stone according to other accounts at the time…
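The speed figure falls straight out of the tabulated walk: total distance between points divided by total time on the road. A minimal sketch of that distance-over-time estimate (leg figures invented, not from the manuscript):

```python
# Invented legs of the walk: (miles from last point, hours walked that day).
legs = [(12.0, 4.0), (15.0, 5.0), (9.0, 3.0)]

total_miles = sum(miles for miles, _ in legs)
total_hours = sum(hours for _, hours in legs)
speed_mph = total_miles / total_hours
print(round(speed_mph, 1))  # -> 3.0 miles per hour for these made-up legs
```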

Anyway, in addition we used the blog to track the walk, each post going live relative to the point that Jonson and his companion had reached. And the points on the map appeared to the same schedule – it gave people a reason to go back and revisit…

The most fun was the other bit…

Anna

I’m going to talk a bit about how we did that in real time. We wanted it to be creative… because we didn’t want to do the walk! And so we tweeted in real time, using a modernised version (and spelling) of the text in the voice of the travelling companion, chunked up into the appropriate portions of the day. It felt more convincing and authentic because it was so fixed in terms of timing. (See @benjonsonswalk.) We did it on Facebook as well. And tweets showed on the blog, so you could follow from tweet to blog… It unfolded in real time and always linked back to more detail about Ben Jonson’s walk on the blog.

Now… it was an add-on to the project, not in the original AHRC bid; we just built it in. It was 788 tweets. It was unbelievably time consuming! We preloaded the tweets on Hootsuite, so they were preloaded but we could then interact as needed. It took a month to set up. And once it is up and running you have to maintain it. Between us we did that, but it was 24/7. You have to reply, you have to thank people for following. We got over 1,200 followers engaging. The fun bit was adding photos to tweets and the blog of, say, buildings from that time that still stand. What I wasn’t expecting was what we got back from the public… People tweeted or commented with information that we didn’t know… and that made it into the book and is acknowledged. It was real Knowledge Exchange in practice!
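The preloading Anna describes amounts to pairing each modernised chunk of the text with the clock time it should appear, then handing that schedule to a tool (Hootsuite, in their case) to post. A toy sketch of building such a schedule — the chunks, times and spacing here are invented for illustration:

```python
from datetime import datetime, timedelta

# Invented chunks of one day's account, in the companion's voice.
chunks = [
    "We set out at dawn upon the north road...",
    "Dined well at noon, and rested the horses...",
    "Came at last to our lodging for the night...",
]

# Post the first chunk at 6am and space the rest six hours apart.
start = datetime(2013, 7, 8, 6, 0)
schedule = [(start + timedelta(hours=6 * i), text) for i, text in enumerate(chunks)]

for when, text in schedule:
    print(when.strftime("%H:%M"), text)
```

Multiplied across 788 tweets, it is easy to see why loading it all took a month.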

James: the Twitter factor got us major media interest from all the major newspapers, radio etc. Big impact.

Anna: Although more and more projects will be doing these things, we did have a novelty factor.

Jo: what was the best thing and the worst thing about what happens?

Greg: the best thing wasn’t digital, it was working with actors. We learned so much working together. The worst thing was… never work with trained ravens!

Ally: the best thing is that I’m quite a nerd, so I love finding little links and connections… I found out that Robert Louis Stevenson was friends with the daughter of James Simpson (?), who had discovered chloroform… There are lovely comments in her texts about Stevenson, as a child, watching her father at work from out of his window. The worst thing is that I’m a stickler and a nerd: I want to start from scratch and learn everything and how it works… The time load is huge.

Louise: the best thing was that I didn’t know I was interested in maps before, so that’s been brilliant. The worst part was having to get up to speed with that and make data fit the right format… but using existing tools can be super time saving.

James: the best thing was the enthusiasm of people out there; I’m a massive nerd and Ben Jonson fan… seeing others’ interest was brilliant, particularly when you got flare-ups of interest as Ben Jonson went through their home town… The worst bit was being heckled by an incredibly rude William Shakespeare on Twitter!

Anna: the other connection with Shakespeare was that Jonson stayed at The George at Huntingdon. You have to hashtag everything, so we hashtagged the place. When we got there… the manager at The George wrote back to say that they stage a Shakespeare play every year in the courtyard. They didn’t know Jonson had stayed there… I love this posthumous meeting!

Q: what’s come across is how much you’ve learned and come to understand what you’ve been using. Wondered how that changed your thinking and perhaps future projects…

Anna: we were Luddites (nerdy, geeky Luddites) but we learned so, so much! A huge learning process. The best way to learn is by doing it; it’s the best way to learn those capabilities. You don’t have to do it all: spot what you can, then go to the right person to help. As to the future… we were down in Yorkshire yesterday talking about a big digital platform across many universities working on Ben Jonson. Huge potential. The collaboration potential is exciting. Possibly Europe-wide, even the US.

Ally: it can change the project… I looked at Omeka… I wanted to use everything, but you have to focus in on what you need to do… Be pragmatic, do what you can in the time; you can build on it later…

Jo: you are working on your own; would co-working work better?

Ally: it would be better with cross-pollination across multiple researchers working together. Initially I wanted to see what I can do, if I can generate some interest. It started off with just me. I spoke to people at the NLS, who are quite interested in directing digitisation in helpful ways. Now I am identifying others to work with… but I wanted to figure out what I can do as a starting point…

Louise: MESH is quite good for that. They are approaching people to do just part of what’s needed… so plotting brothel locations, and I’d already done that… but there were snippets of data to bring in. Working with a bigger team is really useful. Linda, who was at IASH last year, is doing a project in Sweden, and working on those projects has given me the confidence to potentially be part of that…

Greg: I was talking about big data with someone and they said the key thing is when you move from the technology doing what you already could, to it raising new questions, bringing something new… So we are thinking about how to make a miracle play with some real-looking miracles, in virtual ways…

Jo: isn’t plotting your way through a form of big data…?

Greg: it’s visualising something we had in our heads… Stage one is getting the play better known. When we get to stage two we can get to hearing their responses to it too…

Anna: interactions and crowdsourcing coming into the research process, that’s where we are going… building engagement into the project… Social media is very much part of the research process… There are some good English literature people doing stuff. Some of Lisa Otty’s work is amazing. I’m developing a digital literature course… I’ve been following Lisa, also Elliot Lang (?) at Strathclyde… Us historians are maybe behind the crowd…

Ally: libraries are typically one step ahead of academics in terms of integrating academic tools and resources in accessible formats. So the Duncan Street gallery (?) lets you flick through floor plans of the John Murray Archive. It’s stunning. It’s a place to want to get to…

James Loxley: working on some of these projects has led to my working on a project with colleagues from Informatics, with St Andrews and with EDINA, to explore and understand how Edinburgh’s cityscape has evolved through literature. Big data, visualisation… It is partly about finding non-linear, non-traditional ways into the data. This really came from understanding the Ben Jonson walk text in a different structure, as a non-linear thing.

Q: what would you have done differently?

Louise: if I’d known up front how the data had to be cleaned and structured, I’d have done it that way to start with… knowing how I’d use it.

Ally: I would have made a more realistic assessment of what I needed to do, and done more research about the work involved. It would have been good to spend a few months looking at other opportunities and at people working in similar ways, rather than reinventing the wheel.

Greg: in a previous project we performed a play at Hampton Court, our only choice. We chose to make the central character not funny… in a comedy… a huge mistake. Always try to be funny…

Anna: I don’t think we messed up too badly…

James: I’d have folded funding into the original bid…

Anna: we managed to get some funding for the web team as pilot project thankfully. But yes, build it in. Factor it in early. I think it should be integral, not an add on.

Q: you mentioned using databases… what kinds have you used? And you mentioned Storify… I wondered how you used that? What is the immersive environment for the drama?

Greg: I don’t think it exists yet. There has been discussion at Brunel between engineers and developers and my collaborator…

Ally: I think there is a project looking in this area…

Louise: I used Access for my database…

James: to curate the map data we started in Excel… then Dave did magic to make it a Google map. Storify was to archive those tweets, to have a store of that work basically…
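The Excel-to-map step James describes can be sketched, under assumptions (the column layout and places below are invented), as turning spreadsheet rows into GeoJSON, a format that Google Maps and most web-mapping tools can import:

```python
import json

# Invented rows as exported from a spreadsheet: place, lat, lon, date visited.
rows = [
    ("Tottenham", 51.588, -0.072, "day 1"),
    ("Waltham Cross", 51.686, -0.033, "day 1"),
]

features = [
    {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},  # GeoJSON order is lon, lat
        "properties": {"name": name, "visited": visited},
    }
    for name, lat, lon, visited in rows
]
geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson, indent=2))
```

Each property (dates visited, distance from the last point, food and drink) becomes a field shown when a marker is clicked.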

Anna: there are courses out there. Take them. I went on Digimap courses, ArcGIS courses, and social media courses, which were really helpful. Just really embrace this stuff. And things change so fast…

And with that we draw to a close with thank yous to our speakers….

