ReCon 2017 – Liveblog

Today I’m at ReCon 2017, giving a presentation later today (flying the flag for the unconference sessions!) but also looking forward to a day full of interesting presentations on publishing for early career researchers.

I’ll be liveblogging (except for my session) and, as usual, comments, additions, corrections, etc. are welcomed. 

Jo Young, Director of the Scientific Editing Company, is introducing the day and thanking the various ReCon sponsors. She notes: ReCon started about five years ago (with a slightly different name). We’ve had really successful events – and you can explore them all online. We have had a really stellar list of speakers over the years! And on that note…

Graham Steel: We wanted to cover publishing at all stages, from preparing for publication, submission, journals, open journals, metrics, alt metrics, etc. So our first speakers are really from the mid point in that process.

SESSION ONE: Publishing’s future: Disruption and Evolution within the Industry

100% Open Access by 2020 or disrupting the present scholarly comms landscape: you can’t have both? A mid-way update – Pablo De Castro, Open Access Advocacy Librarian, University of Strathclyde

It is an honour to be at this well attended event today. Thank you for the invitation. It’s a long title but I will be talking about how things are progressing towards this goal of full open access by 2020, and to what extent institutions, funders, etc. have been able to introduce disruption into the industry…

So, a quick introduction to me. I am currently at the University of Strathclyde library, having joined in January. It’s quite an old university (founded 1796) and a medium-sized one. Prior to that I was working in The Hague on the EC FP7 Post-Grant Open Access Pilot (OpenAIRE), providing funding to cover OA publishing fees for publications arising from completed FP7 projects. Maybe not the most popular topic in the UK right now but… The main point of explaining my context is that this EU work gave me more of a funder’s perspective, and now I’m able to compare that to more of an institutional perspective. As a result of this pilot there was a report commissioned from a British consultant: “Towards a competitive and sustainable open access publishing market in Europe”.

One key element in this open access EU pilot was the OA policy guidelines which acted as key drivers, and made eligibility criteria very clear. Notable here: publications to hybrid journals would not be funded, only fully open access; and a cap of no more than €2000 for research articles, €6000 for monographs. That was an attempt to shape the costs and ensure accessibility of research publications.

So, now I’m back at the institutional open access coalface. Lots had changed in two years. And it’s great to be back in this space. It is allowing me to explore ways to better align institutional and funder positions on open access.

So, why open access? Well in part this is about more exposure for your work, higher citation rates, and compliance with grant rules. But also it’s about use and reuse: by researchers in developing countries, practitioners who can apply your work, policy makers, and the public and taxpayers, who can access your work. In terms of the wider open access picture in Europe, there was a meeting in Brussels last May where European leaders called for immediate open access to all scientific papers by 2020. It’s not easy to achieve that but it does provide a major driver… However, across these countries we have EU member states with different levels of open access. The UK, Netherlands, Sweden and others prefer “gold” access, whilst Belgium, Cyprus, Denmark, Greece, etc. prefer “green” access, partly because the cost of gold open access is prohibitive.

Funders’ policies are a really significant driver towards open access. Funders include Arthritis Research UK, Bloodwise, Cancer Research UK, Breast Cancer Now, the British Heart Foundation, Parkinson’s UK, the Wellcome Trust, Research Councils UK, HEFCE, the European Commission, etc. Most support green and gold, and will pay APCs (Article Processing Charges), but it’s fair to say that early career researchers are not always at the front of the queue for getting those paid. HEFCE in particular have a green open access policy, requiring research outputs from any part of the university to be made open access; otherwise they will not be eligible for the REF (Research Excellence Framework). As a result, compliance levels are high – probably top of Europe at the moment. The European Commission supports green and gold open access, but typically green as this is more affordable.

So, there is a need for quick progress at the same time as ongoing pressure on library budgets – we pay both for subscriptions and for APCs. Offsetting agreements, which discount subscriptions by APC charges, could be a good solution. There are pros and cons here. In principle they will allow quicker progress towards OA goals, but they will disproportionately benefit legacy publishers. They bring publishers into APC reporting – right now APCs are sometimes invisible to the library as they are paid by researchers, so this is a shift and a challenge. It’s supposed to be a temporary stage towards full open access. And it’s a very expensive intermediate stage: not every country can or will afford it.

So how can disruption happen? Well one way to deal with this would be through policies – for instance not funding hybrid journals (as done in OpenAIRE). And disruption is happening (legal or otherwise), as we can see in Sci-Hub usage, which comes from all around the world, not just developing countries. Legal routes are possible in licensing negotiations. In Germany Projekt DEAL is being negotiated. And this follows similar negotiations by openaccess.nl. At the moment Elsevier is the only publisher not willing to include open access journals.

In terms of tools… The EU has just announced plans to launch its own platform for funded research to be published. And the Wellcome Trust already has a space like this.

So, some conclusions… Open access is unstoppable now, but still needs to generate sustainable and competitive implementation mechanisms. But it is getting more complex and difficult to explain to researchers – that’s a serious risk. Open Access will happen via a combination of strategies and routes – internal fights just aren’t useful (e.g. green vs gold). The temporary stage towards full open access needs to benefit library budgets sooner rather than later. And the power here really lies with researchers, whom OA advocates aren’t always able to keep informed. It is important that you know which journals are open and which are hybrid, and why that matters. And we need to ask whether informing authors about where it would make economic sense to publish is beyond the remit of institutional libraries.

To finish, some recommended reading:

  • “Early Career Researchers: the Harbingers of Change” – Final report from Ciber, August 2016
  • “My Top 9 Reasons to Publish Open Access” – a great set of slides.

Q&A

Q1) It was interesting to hear about offsetting. Are those agreements one-off? continuous? renewed?

A1) At the moment they are one-off and intended to be a temporary measure. But they will probably mostly get renewed… National governments and consortia want to understand how useful they are, how they work.

Q2) Can you explain green open access and gold open access and the difference?

A2) In Gold Open Access, the author pays to make their paper open on the journal website. If that’s a hybrid – i.e. subscription – journal you essentially pay twice: once to subscribe, once to make it open. Green Open Access means that your article goes into your repository (after any embargo), into the worldwide repository landscape (see: https://www.jisc.ac.uk/guides/an-introduction-to-open-access).

Q3) As much as I agree that choices of where to publish are for researchers, there are other factors. The REF pressures you to publish in particular ways. Where can you find more on the relationships between different types of open access and impact? I think that can help.

A3) There are quite a number of studies. For instance on whether APC level is related to impact factor – several studies there. In terms of the REF, funders like Wellcome are desperate to move away from the impact factor. It is hard but evolving.

Inputs, Outputs and emergent properties: The new Scientometrics – Phill Jones, Director of Publishing Innovation, Digital Science

Scientometrics is essentially the study of science metrics and evaluation of these. As Graham mentioned in his introduction, there is a whole complicated lifecycle and process of publishing. And what I will talk about spans that whole process.

But, to start, a bit about me and Digital Science. We were founded in 2011 and we are wholly owned by the Holtzbrinck Publishing Group, which owns the Nature group. Being privately funded we are able to invest in innovation by researchers, for researchers, trying to create change from the ground up. Things like Labguru – a lab notebook (like RSpace); Altmetric; Figshare; ReadCube; Peerwith; Transcriptic – an IoT company; etc.

So, I’m going to introduce a concept: the Evaluation Gap. This is the difference between the metrics and indicators currently or traditionally available, and the information that those evaluating your research might actually want to know. Funders; tenure panels – hiring and promotion panels; universities – your institution, your office of research management; government, funders and policy organisations: all want to achieve something with your research…

So, how do we close the evaluation gap? Introducing altmetrics. It adds to academic impact other types of societal impact – policy documents, grey literature, mentions in blogs, peer review mentions, social media, etc. What else can you look at? Well you can look at grants being awarded… When you see a grant awarded for a new idea, then a paper is published… someone else picks it up and publishes… That can take a long time, so grants can tell us things before publications. You can also look at patents – a measure of commercialisation and potential economic impact further down the line.

So you see an idea germinate in one place, work with collaborators at the institution, spreading out to researchers at other institutions, and gradually out into the big wide world… As that idea travels outward it gathers more metadata, more impact, more associated materials, ideas, etc.

And at Digital Science we have innovators working across that landscape, along that scholarly lifecycle… But there is no point having that much data if you can’t understand and analyse it. You have to classify that data first to do that… Historically that was done by subject area, but increasingly research is interdisciplinary; it crosses different fields. So single tags/subjects are not useful – you need a proper taxonomy to apply here. And there are various ways to do that. You need keywords and semantic modelling, and you can choose to:

  1. Use an existing one if available, e.g. MeSH (Medical Subject Headings).
  2. Consult with subject matter experts (the traditional way to do this, could be editors, researchers, faculty, librarians who you’d just ask “what are the keywords that describe computational social science”).
  3. Text mine abstracts or full text articles (using the content to create a list from your corpus with bag-of-words/word-frequency approaches, for instance, to help you cluster and find the ideas, with a taxonomy emerging).

Now, we are starting to take that text mining approach. But to be of use that data needs to be cleaned and curated. So we hand-curated a list of institutions to go into GRID: the Global Research Identifier Database, to understand organisations and their relationships. Once you have that all mapped you can look at ISNI, the CrossRef databases, etc. And when you have that organisational information you can include georeferences to visualise where organisations are…

An example that we built for HEFCE was the Digital Science BrainScan. The UK has a dual funding model: there is both direct funding and block funding, with the latter awarded by HEFCE and distributed according to the most impactful research as understood by the REF. So, with our BrainScan, we mapped research areas, connectors, etc. to visualise subject areas, their impact, and clusters of strong collaboration, to see where there are good opportunities for funding…

Similarly we visualised text-mined impact statements across the whole corpus. Each impact is captured as a coloured dot. Clusters show similarity… Where things are far apart, there is less similarity. And that can highlight where there is a lot of work on, for instance, the management of rivers and waterways… And these connections weren’t obvious, as they cut across disciplines…

Q&A

Q1) Who do you think benefits the most from this kind of information?

A1) In the consultancy we have clients across the spectrum. In the past we have mainly worked for funders and policy makers to track effectiveness. Increasingly we are talking to institutions wanting to understand strengths, to predict trends… And by publishers wanting to understand if journals should be split, consolidated, are there opportunities we are missing… Each can benefit enormously. And it makes the whole system more efficient.

Against capital – Stuart Lawson, Birkbeck University of London

So, my talk will be a bit different. The arguments I will be making are not in opposition to any of the other speakers here, but are about critically addressing the ways we currently work, and how publishing works. I have chosen to speak on this topic today as I think it is important to make visible the political positions that underlie our assumptions and the systems we have in place today. There are calls to become more efficient but I disagree… Ownership and governance matter at least as much as the outcome.

I am an advocate for open access and I am currently undertaking a PhD looking at open access and how our discourse around it has been co-opted by neoliberal capitalism. And I believe these issues aren’t technical but social, reflecting inequalities in our society; any company claiming to benefit society while operating as a commercial company should raise questions for us.

Neoliberalism is a political project to reshape all social relations to conform to the logic of capital (this is the only slide; apparently a written and referenced copy will be posted on Stuart’s blog). This system turns us all into capital, entrepreneurs of ourselves – quantification and metricisation, whether through tuition fees that put a price on education, turning students into consumers selecting based on rational indicators of future income, or through pitting universities against each other rather than encouraging collaboration. It isn’t just overtly commercial, but about applying ideas of the market to all elements of our work – high impact factor journals, metrics, etc. in the service of proving our worth. If we do need metrics, they should be open and nuanced, but if we only do metrics for people’s own careers, and perform for careers and promotion, then these play into neoliberal ideas of control. I fully understand the pressure; it is hard to live and do research without engaging and playing the game. It is easier to choose not to do this if you are in a position of privilege, and that reflects and maintains inequalities in our organisations.

Since power relations are often about labour and worth, this is inevitably part of work, and the value of labour. When we hear about disruption in the context of Uber, it is about disrupting the rights of workers and labour unions; it ignores the needs of the people who do the work; it is a neoliberal idea. I would recommend seeing Audrey Watters’ recent presentation for the University of Edinburgh on the “Uberisation of Education”.

The power of capital in scholarly publishing, and neoliberal values in our scholarly processes… When disruptors align with the political forces that need to be dismantled, I don’t see that as useful or properly disruptive. Open Access is a good thing in terms of access. But there are two main strands of policy… Research Councils have spent over £80m funding researchers to pay APCs. Publishing open access does not require payment of fees – there are OA journals that are funded in other ways. But if you want the high-end visible journals they are often hybrid journals, and 80% of that RCUK spend has been on hybrid journals. So work is being made open access, but right now this money flows from public funds to a small group of publishers – who take a 30–40% profit – and that system was set up to continue benefitting publishers. Or you can share or publish to repositories… Those are free to deposit and use. The concern with OA policy is the connection to the REF: it constrains where you can publish and what that means, and everything must always be measured in this restricted structure. It can be seen as compliance rather than a progressive movement toward social justice. But open access is having a really positive impact on the accessibility of research.

If you are angry at Elsevier, then you should also be angry at Oxford University and Cambridge University, and others for their relationships to the power elite. Harvard made a loud statement about journal pricing… It sounded good, and they have a progressive open access policy… But it is also bullshit – they have huge amounts of money… There are huge inequalities here in academia and in relationship to publishing.

And I would recommend strongly reading some history on the inequalities, and the racism and capitalism that was inherent to the founding of higher education so that we can critically reflect on what type of system we really want to discover and share scholarly work. Things have evolved over time – somewhat inevitably – but we need to be more deliberative so that universities are more accountable in their work.

To end on a more positive note, technology is enabling all sorts of new and inexpensive ways to publish and share. But we don’t need to depend on venture capital. Collective and cooperative running of organisations in these spaces – such as cooperative centres for research… There are small-scale examples that show the principles, and that this can work. Writing, reviewing and editing is already being done by the academic community, so let’s build governance and process models to continue that, to make it work, to ensure work is rewarded but that the driver isn’t commercial.

Q&A

Comment) That was awesome. A lot of us here want to learn how to play the game. But the game sucks. I am a professor; I get to do a lot of fun things now, because I played the game… We need a way to have people able to do their work without that game. But we need something more specific than socialism… Libraries used to publish academic data… Lots of these metrics are there and useful… And I work with them… But I am conscious that we will be fucked by them. We need a way to react to that.

Redesigning Science for the Internet Generation – Gemma Milne, Co-Founder, Science Disrupt

Science Disrupt run regular podcasts, events, and a Slack channel for scientists, start-ups, VCs, etc. Check out our website. We talk about five focus areas of science. Today I wanted to talk about redesigning science for the internet age. My day job is in journalism and I think a lot about start-ups, and about how we can influence academia, how success manifests itself in the internet age.

So, what am I talking about? Things like Pavegen – power generating paving stones. They are all over the news! The press love them! BUT the science does not work, the physics does not work…

I don’t know if you heard about Theranos which promised all sorts of medical testing from one drop of blood, millions of investments, and it all fell apart. But she too had tons of coverage…

I really like science start-ups, and I like talking about science in a different way… But how can I convince the press, the wider audience, what is good stuff, and what is just hype, not real? One of the problems we face is that people not engaged in research either can’t access the science, or can’t read it even if they can access it… This problem is really big and it influences where money goes and what sort of stuff gets done!

So, how can we change this? There are amazing tools to help (Authorea, Overleaf, protocols.io, Figshare, Publons, LabWorm) and this is great and exciting. But I feel it is very short term… Trying to change something that doesn’t work anyway… Doing collaborative lab notes a bit better, publishing a bit faster… OK… But is it good for sharing science? Thinking about journalists and corporates: they don’t care about academic publishing, it’s not where they go for scientific information. How do we rethink that? What if we were to rethink how we share science?

AirBnB and Amazon are on my slide here to make the point of the difference between incremental change vs. real change. AirBnB addressed issues with hotels, issues of hotels being samey… They didn’t build a hotel, instead they thought about what people want when they traveled, what mattered for them… Similarly Amazon didn’t try to incrementally improve supermarkets.. They did something different. They dug to the bottom of why something exists and rethought it…

Imagine science was “invented” today (ignore all the realities of why that’s impossible). But imagine we think of this thing, we have to design it… How do we start? How will I ask questions, find others who ask questions…

So, a bit of a thought experiment here… Maybe I’d post a question on reddit, set up my own sub-reddit. I’d ask questions, ask why they are interested… Create a big thread. And if I have a lot of people, maybe I’ll have a Slack with various channels about all the facets around a question, invite people in… Use the group to project manage this project… OK, I have a team… Maybe I create a Meet Up Group for that same question… Get people to join… Maybe 200 people are now gathered and interested… You gather all these folk into one place. Now we want to analyse ideas. Maybe I share my question and initial code on GitHub, find collaborators… And share the code, make it open… Maybe it can be reused… It has been collaborative at every stage of the journey… Then maybe I want to build a microscope or something… I’d find the right people, I’d ask them to join my Autodesk 360 to collaboratively build engineering drawings for fabrication… So maybe we’ve answered our initial question… So maybe I blog that, and then I tweet that…

The point I’m trying to make is, there are so many tools out there for collaboration, for sharing… Why aren’t more researchers using these tools that are already there? Rather than designing new tools… These are all ways to engage and share what you do, rather than just publishing those articles in those journals…

So, maybe publishing isn’t the way at all? I get the “game” but I am frustrated about how we properly engage, and really get your work out there. Getting industry to understand what is going on. There are lots of people innovating in new ways… You can use stuff in papers that isn’t being picked up… But see what else you can do!

So, what now? I know people are starved for time… But if you want to really make the impact that you think matters… I understand there is a concern around scooping… But there are ways around that… And if you want to know about all these tools, do come talk to me!

Q&A

Q1) I think you are spot on with vision. We want faster more collaborative production. But what is missing from those tools is that they are not designed for researchers, they are not designed for publishing. Those systems are ephemeral… They don’t have DOIs and they aren’t persistent. For me it’s a bench to web pipeline…

A1) Then why not create a persistent archived URI – a webpage where all of a project’s content is shared. 50% of all academic papers are only read by the person that published them… These stumbling blocks in the way of sharing… It is crazy… We shouldn’t just stop and not share.

Q2) Thank you, that has given me a lot of food for thought. The issue of work not being read, I’ve been told that by funders so very relevant to me. So, how do we influence the professors… As a PhD student I haven’t heard about many of those online things…

A2) My co-founder of Science Disrupt is a computational biologist and PhD student… My response would be about not asking, just doing… Find networks, find people doing what you want. Benefit from collaboration. Sign an NDA if needed. Find the opportunity, then come back…

Q3) I had a comment and a question. Code repositories like GitHub are persistent and you can find a great list of code repositories and meta-articles around those on the Journal of Open Research Software. My question was about AirBnB and Amazon… Those have made huge changes but I think the narrative they use now is different from where they started – and they started more as incremental change… And they stumbled on bigger things, which looks a lot like research… So… How do you make that case for the potential long term impact of your work in a really engaging way?

A3) It is the golden question. Need to find case studies, to find interesting examples… a way to showcase similar examples… and how that led to things… Forget big pictures, jump the hurdles… Show that bigger picture that’s there but reduce the friction of those hurdles. Sure those companies were somewhat incremental but I think there is genuinely a really different mindset there that matters.

And we now move to lunch. Coming up…

UNCONFERENCE SESSION 1 

This will be me, so don’t expect an update for the moment…

SESSION TWO: The Early Career Researcher Perspective: Publishing & Research Communication

Getting recognition for all your research outputs – Michael Markie

Make an impact, know your impact, show your impact – Anna Ritchie

How to share science with hard to reach groups and why you should bother – Becky Douglas

What helps or hinders science communication by early career researchers? – Lewis MacKenzie

PANEL DISCUSSION

UNCONFERENCE SESSION 2

SESSION THREE: Raising your research profile: online engagement & metrics

Green, Gold, and Getting out there: How your choice of publisher services can affect your research profile and engagement – Laura Henderson

What are all these dots and what can linking them tell me? – Rachel Lammey

The wonderful world of altmetrics: why researchers’ voices matter – Jean Liu

How to help more people find and understand your work – Charlie Rapple

PANEL DISCUSSION

 


eLearning@ed 2017

Today I am at the eLearning@ed Conference 2017, our annual day-long event for the eLearning community across the University of Edinburgh – including learning technologists, academic staff and some postgraduate students. As I’m convener of the community I’m also chairing some sessions today, so the notes won’t be at quite my normal pace!

As usual comments, additions and corrections are very welcome. 

For the first two sections I’m afraid I was chairing so there were no notes… But huge thanks to Anne Marie for her excellent quick run through exciting stuff to come… 

Welcome – Nicola Osborne, elearning@ed Convenor

Forthcoming Attractions – Anne Marie Scott, Head of Digital Learning Applications and Media

And with that it was over to our wonderful opening keynote… 

Opening Keynote: Prof. Nicola Whitton, Professor of Professional Learning, Manchester Metropolitan University: Inevitable Failure Assessment? Rethinking higher education through play (Chair: Dr Jill MacKay)

Although I am in education now, my background is as a computer scientist… So I grew up with failure. Do you remember the ZX Spectrum? Loading games there was extremely hit and miss. But the games there – all text based – were brilliant, they worked, they took you on adventures. I played all the games but I don’t think I ever finished one… I’d get a certain way through and then we’d have that idea of catastrophic failure…

And then I met a handsome man… It was unrequited… But he was a bit pixellated… Here was Guybrush Threepwood of the Monkey Island series. And that game changed everything – you couldn’t catastrophically fail; it was almost impossible. But in this game you can take risks, you can try things, you can be innovative… And that’s important for me… That space for failure…

I want to talk about the way that we and our students think about failure in Higher Education, and deal with failure in Higher Education. If we expect to go through life never failing, we are set for disappointment. We don’t laud the failures. J.K. Rowling, the biggest author, was rejected 12 times. The Beatles, the biggest band of the 20th Century, were rejected by record labels many, many times. The lightbulb failed hundreds of times! Thomas Edison said he didn’t fail 100 times, he succeeded in lots of stages…

So, to laud failure… Here are some of mine:

  1. Primary 5 junior mastermind – I’m still angry! I chose horses as my specialist subject so, a tip, don’t do that!
  2. My driving test – that was a real resilience moment… I’ll do it again… I’ll have more lessons with my creepy driving instructor, but I’ll do it again.
  3. First year university exams – I failed one exam, by one mark… It was borderline and they said “but we thought you needed to fail” – I had already been told off for not attending lectures. So I gave up my summer job and spent the summer re-sitting. I learned that there is only so far you can push things… You have to take things seriously…
  4. Keeping control of a moped – in Thailand, with no training… Driving into walls… And learning when to give up… (we then went by walking and bus)
  5. Funding proposals and article submissions, regularly, too numerous to count – failure is inevitable… As academics we tend not to tell you about all the times we fail… We are going to fail… So we have to be fine with failing and learn from it. I was involved in a Jisc project in 2009… I’ve published most on it… It really didn’t work… And when it didn’t work they funded us to write about that. And I was very lucky: one of the Innovation Programme Managers who had funded us said “hey, if some of our innovation funding isn’t failing, then we aren’t being innovative”. But that’s not what we talk about.

For us, for our students… We have to understand that failure is inevitable. Things are currently set up with failure as a bad outcome, rather than an integral part of the learning process… And learning from failure is really important. I have read something – though I’ve not been able to find it again – that those who pass their driving test on the second attempt are better drivers. Failure is about learning. I have small children… They spent their first few years failing to talk, then failing to walk… That’s not failure though, it’s how we learn…

Just a little bit of theory. I want to talk a bit about the concept of the magic circle… The Magic Circle came from game theory, from the 1950s. Picked up by ? Zimmerman in early 2000s… The idea is that when you play with someone, you enter this other space, this safe space, where normal rules don’t apply… Like when you see animals playfighting… There is mutual agreement that this doesn’t count, that there are rules and safety… In Chess you don’t just randomly grab the king. Pub banter can be that safe space with different rules applying…

This happens in games, this happens in physical play… How can we create magic circles in learning… So what is that:

  • Freedom to fail – if you won right away, there’s no point in playing it. That freedom to fail and not be constrained by the failure… How we look at failure in games is really different from how we look at failure in Higher Education.
  • Lusory attitude – this is about a willingness to engage in play, to forget about the rules of the real world, to abide by the rules of this new situation. To park real life… To experiment – that is powerful. And that idea came from Bernard Suits, whose book, The Grasshopper, is a great Playful Learning read.
  • Intrinsic motivation – this is the key area of magic circle for higher education. The idea that learning can be and should be intrinsically motivating is really really important.

So, how many of you have been in an academic reading group? OK, how many have lasted more than a year? Yeah, they rarely last long… People don’t get round to reading the book… We’ve set up a book group with special rules: you either HAVE TO read the book, or you HAVE TO PRETEND that you read the book. We’ve had great turnout, no idea if they all read the books… But we have great discussion… Reframing that book group just a small bit makes a huge difference.

That sort of tiny change can be very powerful for integrating playfulness. We don’t think twice about doing this with children… Part of the issue with play, especially with adults, is what matters about play… About that space to fail. But also the idea of play as a socialised bonding space, for experimentation, for exploration, for possibilities, for doing something else, for being someone else. And the link with motivation is quite well established… I think we need to understand that different kinds of play have different potential, but it’s about play and people, and safe play…

This is my theory heavy slide… This is from a paper I’ve just completed with colleagues in Denmark. We wanted to think “what is playful learning”… We talk about Higher Education and playful learning in that context… So what actually is it?

Well, there is a signature pedagogy for playful learning in higher education, under which we have surface (game) structures; deep (play) structures; implicit (playful) structures. Signature pedagogies also exist in disciplines like architecture or engineering…

This came out of work on what students respond to…

So Surface (game) structures includes: ease of entry and explicit progression; appropriate and flexible levels of challenge; engaging game mechanics; physical or digital artefacts. Those are often based around games and digital games… But you can be playful without games…

Deep (play) structures is about: active and physical engagement; collaboration with diversity; imagining possibilities; novelty and surprises.

Implicit (playful) structures: lusory attitude; democratic values and openness; acceptance of risk-taking and failure; intrinsic motivation. That is so important for us in higher education…

So, rant alert…

Higher Education is broken. And that is because schools are broken. I live in Manchester (I know things aren’t as bad in Scotland) and we have assessment all over the place… My daughter is 7 and sitting exams. Two weeks of them. They are talking about exams for reception kids – 4 year olds! We have a performative culture of “you will be assessed, you will be assessed”. And then we are surprised when that’s how our students respond… And we have the TEF appearing… The golds, silvers, and bronzes… Based on fairly random metrics… And then we are surprised when people work to the metrics. I think that assessment is a great way to suck out all the creativity!

So, some questions my kids have recently asked:

  • Are there good viruses? I asked an expert… apparently there are, for treating people… (But they often mutate.)
  • Do mermaids lay eggs? Well they are part fish…
  • Do Snow Leopards eat tomatoes? Where did this question come from? Who knows? Apparently they do eat monkeys… What?!

But contrast that to what my students ask:

  • Will I need to know this for the exam?
  • Are we going to be assessed on that?

That’s what happens when we work to the metrics…

We are running a course where there were two assessments. One was formative… And students got angry that it wasn’t worth credit… So I started to think about what is important about assessment. I plotted feedback from low to high, and consequence from low to high… So, low consequence, low feedback…

We have the idea of the Trivial Fail – we all do those and it doesn’t matter (e.g. forgetting to signal at a roundabout), and lots of opportunity to fail like that.

We also have the Critical Fail – High Consequence and Low Feedback – kids exams and quite a lot of university assessment fits there.

We also have Serious Fail – High Consequence and High Feedback – I’d put PhD Vivas there… consequences matter… But there is feedback and can be opportunity to manage that.

What we need to focus on in Higher Education is the Micro Fail – low consequence with high feedback. We need students to have that experience, and to value that failure, to value failure without consequence…
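The four failure types above form a simple two-by-two taxonomy of consequence against feedback, which could be sketched like this (a hypothetical illustration of the talk’s framework, not anything the speaker showed):

```python
# A minimal sketch of the failure taxonomy described above:
# consequence x feedback -> failure type.
def classify_failure(consequence: str, feedback: str) -> str:
    """Map low/high consequence and feedback to the four failure types."""
    quadrants = {
        ("low", "low"): "Trivial Fail",    # e.g. forgetting to signal at a roundabout
        ("high", "low"): "Critical Fail",  # e.g. school exams, much university assessment
        ("high", "high"): "Serious Fail",  # e.g. a PhD viva
        ("low", "high"): "Micro Fail",     # the quadrant HE should aim for
    }
    return quadrants[(consequence, feedback)]

print(classify_failure("low", "high"))  # → Micro Fail
```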

So… How on earth do we actually do this? How about we “Level Up” assessment… With bosses at the end of levels… And you keep going until you reach as far as you need to go, and have feedback filled in…

Or the Monkey Island assessment. There is a goal but it doesn’t matter how you get there… You integrate learning and assessment completely, and ask people to be creative…

Easter Egg assessment… Not to do with chocolate but “Easter Eggs” – surprises… You don’t know how you’ll be assessed… Or when you’ll be assessed… But you will be! And it might be fun! So you have to go to lectures… Real life works like that… You can’t know which days will count ahead of time.

Inevitable Failure assessment… You WILL fail first time, maybe second time, third time… But eventually pass… Or even maybe you can’t ever succeed and that’s part of the point.

The point is that failure is inevitable and you need to be able to cope with that and learn from that. On which note… Here is my favourite journal, the Journal of Universal Rejection… This is quite a cathartic experience, they reject everything!

So I wanted to talk about a project that we are doing with some support from the HEA… Eduscapes… Have you played Escape Rooms? They are so addictive! There are lots of people creating educational Escape Rooms… This project is a bit different… So there are three parts… You start by understanding what the Escape Room is, how they work; then some training; and then design a game. But they have to trial them again and again and again. We’ve done this with students, and with high school students three times now. There is inevitable failure built in here… And the project can run over days or weeks or months… But you start with something and try and fail and learn…

This is collaborative, it is creative – there is so much scope to play with, sometimes props, sometimes budget, sometimes what they can find… In the schools case they were maths and Comp Sci students so there was a link to the curriculum. It is not assessed… But other people will see it – that’s quite a powerful motivator… We have done this with reflection/portfolio assessment… That resource is now available, there’s a link, and it’s a really simple way to engage in something that doesn’t really matter…

And while I’m here I have to plug our conference, Playful Learning, now in its second year. We were all about thinking differently about conferences… But we were always presenting at traditional conferences. So our conference is different… Most of it is hands on, all different stuff, a space to do something different – we had storytelling in a tent as one of these… Lots of space, but nothing really went wrong. But we need space for things to fail. Applications are closed this year… But there will be a call next year… So play more, be creative, fail!

So, to finish… Play is powerful, and play has massive potential… But we also have to think about diversity of play, the resilience to play… A lot of the research on playful learning and assessment doesn’t recognise the importance of gender, race, context, etc… And the importance of the language we use in play… It has nuance, and comes with distinctions… We have to encourage people to play and get involved. And we really have to re-think assessment – for ourselves, for universities, for students, for school pupils… Until we rethink this, it will be hard to have any real impact for playful learning…

Jill: Thank you so much, that was absolutely brilliant. And that Star Trek reference is “Kobayashi Maru”!

Q&A

Q1) In terms of playful learning and assessment, I was wondering how self-assessment can work?

A1) That brings me back to previous work I have done around reflection… And I think that’s about bringing that reflection into playful assessment… But it’s a hard question… More space and time for reflection, possibly more space for support… But otherwise not that different from other assessment.

Q2) I run a research methods course for an MSc… We tried to invoke playfulness with a fake data set with dragons and princesses… Any other examples of that?

A2) I think that that idea of it being playful, rather than games, is really important. You can use playful images, or data that makes rude shapes when you graph it!

Q3) Nic knows that I don’t play games… I was interested in that difference between gaming and play and playfulness… There is something about games that don’t entice me at all… But that Lusory attitude did feel familiar and appealing… That suspension of disbelief and creativity… And that connection with gendered discussion of play and games.

A3) We are working on a taxonomy of play. That’s quite complex… Some things are clearly play… A game, messing with LEGO… Some things are not play, but can be playful… Crochet… Jigsaw puzzles… They don’t have to be creative… But you can apply that attitude to almost anything. So there is play and there is a playful attitude… That latter part is the key thing, the being prepared to fail…

Q4) Not all games are fun… Easy to think playfulness and games… A lot of games are work… Competitive gaming… Or things like World of Warcraft – your wizard chores. And intensity there… Failure can be quite problematic if working with 25 people in a raid – everyone is tired and angry… That’s not a space where failure is ok… So in terms of what we can learn from games it is important to remember that games aren’t always fun or playful…

A4) Indeed, and not all play is fun… I hate performative play – improv, people touching me… It’s about understanding… It’s really nuanced. It used to be that “students love games because they are fun” and now “students love play because it’s fun” and that’s still missing the point…

Q5) I don’t think you are advocating this but… Thinking about spoonful of sugar making assessment go down… Tricking students into assessment??

A5) No. It’s taking away the consequences in how we think about assessment. I don’t have a problem with exams, but the weight on that, the consequences of failure. It is inevitable in HE that we grade students at different levels… So we have to think about how important assessment is in the real world… We don’t have equivalents of university assessments in the real world… Let’s say I do a bid, lots of work, not funded… In the real world I try again. If you fail your finals, you don’t get to try again… So it’s about not making it “one go and it’s over”… That’s hard but a big change and important.

Q6) I started in behavioural science in animals… Play there is “you’ll know it when you see it” – we have clear ideas of what other behaviours look like, but play is hard to describe but you know it when you see it… How does that work in your taxonomy…

A6) I have a colleague who is a physical science teacher trainer… And he’s gotten to “you’ll know it when you see it”… Sometimes that is how you perceive that difference… But that’s hard when you apply for grants! It’s a bit of an artificial exercise…

Q7) Can you tell us more about play and cultural diversity, and how we need to think about that in HE?

A7) At the moment we are at the point that people understand and value play in different ways. I have a colleague looking at diversity in play… A lot of research previously is on men, and privileged white men… So partly it’s about explaining why you are doing what you are doing, in the way you are doing it… You have to think beyond that, to appropriateness, to have play in your toolkit…

Q8) You talk about physical spaces and playfulness… How much impact does that have?

A8) It’s not my specialist area but yes, the physical space matters… And you have to think about how to make your space more playful..

Introductions to Break Out Sessions: Playful Learning & Experimentation (Nicola Osborne)

  • Playful Learning – Michael Boyd (10 min)

We are here today with the UCreate Studio… I am the manager of the space, and we have student assistants. We also have high school students supporting us. This pilot runs to the end of July and provides a central maker space… To create things, to make things, to generate ideas… This is part of the maker movement; we are a space for playful learning through making. There are about 1400 maker spaces worldwide, many in universities in the UK too… Why do they pop up in universities? They are great creative spaces to learn.

You can get hands on with technology… It is about peer based learning… And project learning… It’s a safe space to fail – it’s non assessed stuff…

Why is it good for learning? Well for instance the World Economic Forum predict that 35% of core professional skills will change from 2015 to 2020. Complex problem solving, critical thinking, creativity, judgement and decision making, cognitive flexibility… These are things that can’t be automated… And can be supported by making and creating…

So, what do we do? We use new technologies, we use technologies that are emerging but not yet widely adopted. And we are educational… That first few months is the hard bit… We don’t lecture much, we are there to help and guide and scaffold. Students can feel confident that they have support if they need it.

And, we are open source! Anyone in the University can use the space, be supported in the space, for free as long as they openly share and license whatever they make. Part of that bigger open ethos.

So, what gets made? It includes academic stuff… Someone made a holder for his spectrometer and 3D printed it. He’s now looking to augment this with his chemistry to improve that design; we have Josie in archaeology scanning artefacts and then using that to engage people – using VR; Dimitra in medicine, following a poster project for a cancer monitoring chip, started prototyping; Hayden in Geosciences is using 3D scanning to see the density of plant matter to understand climate change.

But it’s not just that. Also other stuff… Henry studies architecture, but has a grandfather who needs medication, and whose family worry about whether he takes it… So he’s designed a system that displays that. Then Greg in ECA is looking at projecting memories on people… To see how that helps…

So, I wanted to flag some ideas we can discuss… One of the first projects when I arrived: Fiona Hale and Chris Speed (ECA) ran “Maker Go”, which had product design students from across the years come up with mobile maker space projects… The results were fantastic – a bike used to scan a space… A way to follow and make paths with paint… A coffee machine powered by failed crits, etc. Brilliant stuff. And afterwards there was a self-organised exhibition (the first they can remember), Velodrama…

Next up was the Edinburgh IoT challenge… Students and academics came together to address challenges set by the Council, the University, etc. Designers, engineers, scientists… It led to a really special project: two undergraduate students approached us to set up the new Embedded and Robotics Society – they run sessions every two weeks. And it is going from strength to strength.

Last but not least… A digital manufacturing IP session trialled last term with Dr Stema Kieria, to explore 3D scanning and printing and the impact on IP… A huge area… Echoes of taping songs off the radio. We took something real, showed it hands on, learned about the technologies, scanned copyrighted materials, and explored this. They taught me stuff! And that led to a Law and Artificial Intelligence Hackathon in March. This was law and informatics working together, huge ideas… We hope to see them back in the studio soon!

  • Near Future Teaching Vox Pops – Sian Bayne (5 mins)

I am Assistant Vice Principal for Digital Education and I was very keen to look at designing the future of digital education at Edinburgh. I am really excited to be here today… We want you to answer some questions on what teaching will look like in this university in 20 or 30 years time:

  • will students come to campus?
  • will we come to campus?
  • will we have AI tutors?
  • How will teaching change?
  • Will learning analytics trigger new things?
  • How will we work with partner organisations?
  • Will peers accredit each other?
  • Will MOOCs still exist?
  • Will performance enhancement be routine?
  • Will lectures still exist?
  • Will exams exist?
  • Will essays be marked by software?
  • Will essays exist?
  • Will discipline still exist?
  • Will the VLE still exist?
  • Will we teach in VR?
  • Will the campus be smart? And what does, e.g., using IoT to monitor spaces mean socially?
  • Will we be smarter through technology?
  • What values should shape how we change? How we use these technologies?

Come be interviewed for our voxpops! We will be videoing… If you feel brave, come see us!

And now to a break… and our breakout sessions, which were… 

Morning Break Out Sessions

  • Playful Learning Mini Maker Space (Michael Boyd)
  • 23 Things (Stephanie (Charlie) Farley)
  • DIY Film School (Gear and Gadgets) (Stephen Donnelly)
  • World of Warcraft (download/set up information here) (Hamish MacLeod & Clara O’Shea)
  • Near Future Teaching Vox Pops (Sian Bayne)

Presentations: Fun and Games and Learning (Chair: Ruby Rennie, Lecturer, Institute for Education, Teaching and Leadership (Moray House School of Education))

  • Teaching with Dungeons & Dragons – Tom Boylston

I am based in Anthropology and we’ve been running a course on the anthropology of games. And I just wanted to talk about that experience of creating playful teaching and learning. So, Dungeons and Dragons was designed in the 1970s… You wake up, you’re chained up in a dungeon, you are surrounded by aggressive warriors… And as a player you choose what to do – fight them, talk to them, etc… And you can roll dice to decide an action, to make the next play. It is always a little bit improvisational, and that’s where the fun comes in!

There are some stigmas around D&D as the last bastion of the nerdy white bloke… But… The situation we had was a 2 hour lecture slot, and I wanted to split that in two. To engage with a reading on the creative opportunities of imagination. I wanted them to make a character, almost like in creative writing classes, to play that character and see what that felt like, how that changed things… Because part of the fun of role playing is getting to be someone else. Now these games do raise identity issues – gender, race, sexuality… That can be great but it’s not what you want in a big group with people you don’t yet have trust with… But there is something special about being in a space with others, where you don’t know what could happen… It is not a simple thing to take a traditional teaching setting and make it playful… One of the first things we look at when we think about play is people needing to consent to play… And if you impose that on a room, that’s hard…

So early in the course we looked at Erving Goffman’s Frame Analysis, and we used Pictionary cards… We looked at the social cues from the space, the placement of seats, microphones, etc. And then the social cues of play… Some of the foundational work on animal play asks how you know dogs are playfighting… It’s the half-bite, playful rather than painful… So how do I invite a room full of people to play? I commanded people to play Pictionary, to come up and play… Eventually someone came up… Eventually the room accepted that and the atmosphere changed. It really helped that we had been reading about framing. And I asked what had changed, and they were able to think and talk about that…

But D&D… People were sceptical. We started with students making me a character. They made me Englebert, a 5 year old lizard creature… To display the playful situation, a bit silly, to model and frame the situation… I sent them comedy D&D podcasts to listen to and asked them to come back a week later… I promised that we wouldn’t do it every week but… I shared some creative writing approaches to writing a back story, to understand what would matter about this character… Only having done this preparatory work, thought about framing… Only then did I try out my adventure on them… It’s about a masquerade in Cameroon, where children try on others’ masks… I didn’t want to appropriate that. But just to take some cues and ideas and tone from that. And when we got to the role playing, the students were up for it… And we did this either as individual students, or they could pair up…

And then we had a debrief – crucial for a playful experience like this. People said there was more negotiation than they expected as they set up the scene and created. They were surprised how people took care of their characters…

The concluding thing was… At the end of the course I had probably shared more that I cared about. Students interrupted me more – with really great ideas! And students really engaged.

Q&A

Q1) Would you say that D&D would be a better medium than an online role playing game… Extemporisation rather than structured computation?

A1) We did talk about that… We created a WoW character… There really is a lot of space, unexpected situations you can create in D&D… Lots of improvisation… More happened in that than in the WoW stuff that we did… It was surprisingly great.

Q2) Is that partly about sharing and revealing you, rather than the playfulness per se?

A2) Maybe a bit… But I would have found that hard in another context. The discussion of games really brought that stuff out… It was great and unexpected… Play is the creation of unexpected things…

Q3) There’s a trust thing there… We can’t expect students to trust us and the process, unless we show our trust ourselves…

A3) There was a fair bit of background effort… Thinking about signalling a playful space, and how that changes the space… The playful situations did that without me intending to or trying to!

  • Digital Game Based Learning in China – Sihan Zhou

I have been finding this event really inspiring… There is so much to think around playfulness. I am from China, and the concept of playful learning is quite new in China so I’m pleased to talk to you about the platform we are creating – Tornado English…

On this platform we have four components – a bilingual animation, a game, and a bilingual chat bot… If the user clicks on the game, they can download it… So far we have created two games: Word Pop – vocabulary learning and Run Rabbit – syntactic learning, both based around Mayer’s model (2011).

Game mechanics are usually understood by comparing user skills and level of challenge – too easy and users will get bored; too challenging and users will be frustrated and demotivated. For apps in China, many of the educational products tend to be more challenging than fun – more educational apps than educational games. So in our games we use timing and scoring to make things more playful, and interactions like popping bubbles or clicking on moles popping out of holes in the ground. In Word Smash students have to match images to vocabulary as quickly as possible… In Run Rabbit the student has to speak a phrase in order to get the rabbit to run to the right word in the game and place it…
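The skill/challenge balance described here is often sketched as a “flow channel”. As a hypothetical illustration (not Tornado English’s actual code), a level-selection heuristic might look like:

```python
# A minimal sketch of the skill/challenge balance described above:
# challenge far above skill frustrates, far below bores; the sweet
# spot in between keeps players engaged. The +/-1 band is an assumption.
def challenge_state(skill: int, challenge: int) -> str:
    """Classify a level's difficulty relative to the player's skill."""
    if challenge > skill + 1:
        return "frustrated"   # too hard: demotivating
    if challenge < skill - 1:
        return "bored"        # too easy: no point playing
    return "engaged"          # roughly matched: the playful sweet spot

print(challenge_state(3, 4))  # → engaged
```

A game tuned this way would raise `challenge` only as measured `skill` (e.g. timing and scoring data) improves.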

When we designed the game, we considered how we could ensure that the game is educationally effective, and to integrate it with the English curriculum in school. We tie to the 2011 English Curriculum Standards for Compulsory Education in China. Students have to complete a sequence of levels to reach the next level of learning – autonomous learning in a systematic way.

So, we piloted this app in China, working with 6 primary schools in Harbin, China. Data has been collected from interviews with teachers, classroom observation, and questionnaires with parents.

This work is a KTP – a Knowledge Transfer Partnership – project and the KTP research is looking at Chinese primary school teachers’ attitudes towards game-based learning. And there is also an MSc TESOL Dissertation looking at teachers attitudes towards game based learning… For instance they may or may not be able to actually use these tools in the classroom because of the way teaching is planned and run. The results of this work will be presented soon – do get in touch.

Our future game development will focus more on a communicative model, task-based learning, and learner autonomy. So the character lands on a new planet, have to find their way, repair their rocket, and return to earth… To complete those task the learner has to develop the appropriate language to do well… But this is all exploratory so do talk to me and to inspire me.

Q&A

Q1) I had some fantastic Chinese students in my playful anthropology course and they were explaining quite mixed attitudes to these approaches in China. Clearly there is that challenge to get authorities to accept it… But what’s the compromise between learning and fun.

A1) The game has features designed for fun… I met with the education bureau and teachers, to talk about how this is educationally effective… Then when I get into classrooms to talk to the students, I focus more on gaming features – why you play it, how you progress and unlock new levels. The emphasis has to be quite different depending on the audience. One has to understand the context.

Q2) How have the kids responded?

A2) They have been really inspired and want to try it out. The kids are 8 or 9 years old… They were keen but also knew that their parents weren’t going to be as happy about playing games in the week when they are supposed to do “homework”. We get data on how this is used… We see good use on week days, but huge use on weekends, and longer play time too!

Q3) In terms of changing attitudes to game based learning in China… When we were testing in Taiwan the attitude was different – we were expected to build playful approaches in…

A3) There is “teaching reform” taking place… And more games and playfulness in classrooms. But digital games have been the problem in terms of triggering a cautious mentality. The new generation uses more elearning… But there is a need to demonstrate that usefulness and take it out to others.

  • VR in Education – Cinzia Pusceddu-Gangarosa

I am manager of learning technology in the School of Biological Sciences, and also a student on the wonderful MSc in Digital Education. I’m going to talk about Virtual Reality in Education.

I wanted to start by defining VR. The definition I like best is from Merriam-Webster. It includes key ideas… the idea of a “simulated world” and the ways one engages with it. VR technologies include headsets from Oculus Rift (high end) through to Google Cardboard (low end) that let you engage… But there is more interesting stuff there too… There are VR “cave” spaces – where you enter and are surrounded by screens. There are gloves, there are other kinds of experience.

Part of virtual reality is about an intense idea of presence, of being there, of being immersed in the world, fully engaged – so much so that the interface disappears, you forget you are using technologies.

In education VR is not anything new. The first applications were in the 1990s… But in the 2000s desktop VR became more common – spaces such as Second Life – more acceptable and less costly to engage with.

I want to show you a few examples here… One of the first experiments was from the Institute for Simulation and Training, PA, where students could play “noseball”, playing with a virtual ball using a set of wearables. You can see they still used headsets, similar to now but not particularly sophisticated… I also wanted to touch on some other university experiments with VR… The first one is Google Expeditions. This is not a product that has been looked at much in universities – it has been trialled in schools a lot… It’s a way to travel in time and space through Google Cardboard… Through the use of apps and tools… And Google supports teachers to use this.

A more interesting experiment is at Stanford’s Virtual Human Interaction Lab, looking at cognitive effects on students’ behaviour, and perspective-taking in these spaces, looking at empathy – how VR promotes and encourages empathy. Students impersonating a tree are more cautious about wasting paper. Impersonating another person creates more connection and thoughtfulness about their behaviour towards that person… There was even an experiment on being a cow, and whether that might make people more likely to become vegetarian.

Another interesting experiment is at Boston University who are engaging with Ulysses – based on a book but not in a literal way. At Penn State they have been experimenting with VR and tactile experiences.

So, to conclude, what are the strengths of VR in education? Well, it is about experiencing what is not otherwise possible – because of cost, distance, time, size, or safety. Also non-symbolic learning (maths, chemistry, etc.); learning by doing; and engaging experiences. But there are weaknesses too: it is hard to find a VR designer; it requires technical support; and sometimes VR may not be the right technology – maybe we are replicating the wrong thing, maybe it is not innovative enough…

Q&A

Q1) Art Gallery/use in your area?

A1) I would like to do a VR project. It’s hard to understand until you try it out… Most of what I’ve presented is based on what I’ve read and researched, but I would love to explore the topic in a real project.

Q2) With all these technologies, I was wondering if a story is an important accompaniment to the technology and the experience?

A2) I think we do need a story. I don’t think any technology adds value unless we have a vision, and an understanding of full potential of the technology – and what it does differently, and what it really adds to the situation and the story…

Coming up…

Afternoon Keynote: Dr Hamish MacLeod, Senior Lecturer in Digital Education, Institute for Education, Community and Society, Moray House School of Education: Learning with and through Ambiguity (Chair: Cinzia Pusceddu-Gangarosa)

Afternoon Break Out Sessions

  • Playful Learning Mini Maker Space – Michael Boyd
  • 23 Things – Stephanie (Charlie) Farley
  • DIY Film School (Gear and Gadgets) – Stephen Donnelly
  • Gamifying Wikipedia – Ewan McAndrew
  • Near Future Teaching Vox Pops – Sian Bayne

Presentations

Short 10 minute presentations with 5 minutes for Q&A

  • Learning to Code: A Playful Approach – Areti Manataki
  • Enriched engagement with recorded lectures – John Lee
  • DIY Filmschool and Media Hopper (MoJo) – Stephen Donnelly

Chair: Ross Ward, Learning Technology Advisor (ISG Learning, Teaching & Web Services)

Closing Remarks – Prof. Sian Bayne, Moray House School of Education


IIPC WAC / RESAW Conference 2017 – Day Three Liveblog

It’s the final day of the IIPC/RESAW conference in London. See my day one and day two posts for more information on this. I’m back in the main track today and, as usual, these are live notes so comments, additions, corrections, etc. are all welcome.

Collection development panel (Chair: Nicola Bingham)

James R. Jacobs, Pamela M. Graham & Kris Kasianovitz: What’s in your web archive? Subject specialist strategies for collection development

We’ve been archiving the web for many years but the need for web archiving really hit home for me in 2013 when NASA took down every one of their technical reports – for review on various grounds. And the web archiving community was very concerned. Michael Nelson said in a post “NASA information is too important to be left on nasa.gov computers”. And I wrote about what happens when we rely on pointing, not archiving.

So, as we planned for this panel we looked back on previous IIPC events and we didn’t see a lot about collection curation. We posed three topics all around these areas. So for each theme we’ll watch a brief screen cast by Kris to introduce them…

  1. Collection development and roles

Kris (via video): I wanted to talk about my role as a subject specialist and how collection development fits into that. As a subject specialist that is a core part of the role, and I use various tools to develop the collection. I see web archiving as absolutely being part of this. Our collection is books, journals, audio visual content, quantitative and qualitative data sets… Web archives are just another piece of the pie. And when we develop our collection we are looking at what is needed now but also in anticipation of what will be needed 10 or 20 years in the future, building a solid historical record that will persist in collections. And we think about how our archives fit into the bigger context of other archives around the country and around the world.

For the two web archives I work on – CA.gov and the Bay Area Governments archives – I am the primary person engaged in planning, collecting, describing and making available that content. And when you look at the web capture life cycle you need to ensure the subject specialist is included and their role understood and valued.

The CA.gov archive involves a group from several organisations including the government library. We have been archiving since 2007 in the California Digital Library initially. We moved into Archive-It in 2013.

The Bay Area Governments archives includes materials on 9 counties, but primarily and comprehensively focused on two key counties here. We bring in regional governments and special districts where policy making for these areas occur.

Archiving these collections has been incredibly useful for understanding government, their processes, how to work with government agencies and the dissemination of this work. But as the sole responsible person that is not ideal. We have had really good technical support from Internet Archive around scoping rules, problems with crawls, thinking about writing regular expressions, how to understand and manage what we see from crawls. We’ve also benefitted from working with our colleague Nicholas Taylor here at Stanford who wrote a great QA report which has helped us.

We are heavily reliant on crawlers, on tools and technologies created by you and others, to gather information for our archive. And since most subject selectors have pretty big portfolios of work – outreach, instruction, as well as collection development – we have to have good ties to developers, and to the wider community with whom we can share ideas and questions is really vital.

Pamela: I’m going to talk about two Columbia archives, the Human Rights Web Archive (HRWA) and Historic Preservation and Urban Planning. I’d like to echo Kris’ comments about the importance of subject specialists. The Historic Preservation and Urban Planning archive is led by our architecture subject specialist and we’d reached a point where we had to collect web materials to continue that archive – and she’s done a great job of bringing that together. Human Rights seems to have long been networked – using the idea of the “internet” long before the web and hypertext. We work closely with Alex Thurman, and have an additional specially supported web curator, but there are many more ways to collaborate and work together.

James: I will also reflect on my experience. The FDLP – Federal Depository Library Program – involves libraries receiving absolutely every government publication in order to ensure a comprehensive archive. There is a wider programme allowing selective collection. At Stanford we are 85% selective – we only weed out content (after five years) very lightly, and usually flyers etc. As a librarian I curate content. As an FDLP library we have to think of our collection as part of the wider set of archives, and I like that.

As archivists we also have to understand provenance… How do we do that with the web archive? And at this point I have to shout out to Jefferson Bailey and colleagues for the “End of Term” collection – archiving all gov sites at the end of government terms. This year has been the most expansive, and the most collaborative – including FTP and social media. And, due to the Trump administration’s hostility to science and technology, we’ve had huge support – proposals of seed sites, data capture events etc.

  2. Collection development approaches to web archives: perspectives from subject specialists

As subject specialists we all have to engage in collection development – there are no vendors in this space…

Kris: Looking again at the two government archives I work on, there are Depository Program Statuses to act as a starting point… But these haven’t been updated for the web. However, this is really a continuation of the print collection programme. And web archiving actually lets us collect more – we are no longer reliant on agencies putting content into the Depository Program.

So, for CA.gov we really treat this as a domain collection. And no-one else is really doing this except some UCs, myself, and the state library and archives – not the other depository libraries. However, we don’t collect think tanks or the not-for-profit players that influence policy – a decision made for clarity of scope, although this content provides important context.

We also had to think about granularity… For instance for the CA transport there is a top level domain and sub domains for each regional transport group, and so we treat all of these as seeds.

Scoping rules matter a great deal, partly as our resources are not unlimited. We have been fortunate that with the CA.gov archive that we have about 3TB space for this year, and have been able to utilise it all… We may not need all of that going forwards, but it has been useful to have that much space.
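Scoping rules like these are usually expressed as URL patterns. A minimal sketch of how a domain-scoped collection such as CA.gov might express accept/reject rules – the patterns and the calendar-trap example are hypothetical, not the archive’s actual rules:

```python
import re

# Hypothetical accept/reject rules in the spirit of Archive-It seed scoping;
# these are NOT the real CA.gov rules.
SEED_PATTERNS = [
    re.compile(r"^https?://([a-z0-9-]+\.)*ca\.gov/"),  # ca.gov and any subdomain
]
REJECT_PATTERNS = [
    re.compile(r"/calendar/\d{4}/\d{2}/"),  # an imagined crawler trap
]

def in_scope(url: str) -> bool:
    """Return True if the URL should be crawled under these rules."""
    if any(p.search(url) for p in REJECT_PATTERNS):
        return False
    return any(p.match(url) for p in SEED_PATTERNS)
```

Treating each regional sub-domain as its own seed, as described above, then just means adding more specific patterns (or seed URLs) under the same domain rule.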

Pamela: Much of what Kris has said reflects our experience at Columbia. Our web archiving strengths mirror many of our other collection strengths and indeed I think web archiving is this important bridge from print to fully digital. I spent some time talking with our librarian (Chris) recently, and she will add sites as they come up in discussion, she monitors the news for sites that could be seeds for our collection… She is very integrated in her approach to this work.

For the human rights work one of the challenges is the time that we have to contribute. And this is a truly interdisciplinary area with unclear boundaries, and both of those aspects are challenging. We do look at subject guides and other practice to improve and develop our collections. And each fall we sponsor about two dozen human rights scholars to visit and engage, and that feeds into what we collect… The other thing that I hope to do in the future is more assessment, looking at more authoritative lists in order to compare with other places… Colleagues look at a site called Idealist which lists opportunities and funding in these types of spaces. We also try to capture sites that look more vulnerable – small activist groups – although it is not clear if they actually are at risk.

Cost-wise, the expensive parts of collecting are the human effort to catalogue and the permissions process. And yesterday’s discussion raised the possible need for ethics groups as part of the permissions process.

In the web archiving space we have to be clearer on scope and boundaries as there is such a big, almost limitless, set of materials to pick from. But otherwise plenty of parallels.

James: For me the material we collect is in the public domain so permissions are not part of my challenge here. But there are other aspects to my work, including LOCKSS. In the case of the Fugitive US Agencies Collection we take entire sites (e.g. CBO, GAO, EPA) plus sites at risk (e.g. Census, Current Industrial Reports). These “fugitive” agency publications should be in the depository programme but are not; the lost documents that fail to make it out are what this collection is about. When a library notes a lost document I share that on the Lost Docs Project blog, and then am also able to collect and seed the cloud and web archive – using the WordPress Amber plugin – for links. For instance the CBO report on the health bill, aka Trump Care, was missing… In fact many CBO publications were missing, so I have added CBO as a seed for our Archive-It collection.

  3. Discovery and use of web archives

Discovery and use of web archives is becoming increasingly important as we look for needles in ever larger haystacks. So, firstly, over to Kris:

Kris: One way we get archives out there is in our catalogue, and into WorldCat. That’s one place to help other libraries know what we are collecting, and how to find and understand it… So I would be interested to do some work with users around what they want to find and how… I suspect it will be about a specific request – e.g. a city council in one place over a ten year period… But they won’t be looking for a web archive per se… We have to think about that, and what kind of intermediaries are needed to make that work… Can we also provide better seed lists and documentation for this? In Social Sciences we have the Code Book and I think we need to share the equivalent information for web archives, to expose documentation on how the archive was built… And link to seeds and other parts of collections.

One other thing we have to think about is process, and documenting the ingest mechanism. We are trying to do this for CA.gov to better describe what we do… But maybe there is a standard way to produce that sort of documentation – like the Codebook…

Pamela: Very quickly… At Columbia we catalogue individual sites. We also have a customised portal for the Human Rights Web Archive. That has facets for “search as research” so you can search, develop and learn by working through facets – that’s often more useful than item searches… And, in terms of collecting for the web, we do have to think of what we collect as data for analysis as part of larger data sets…

James: In the interests of time we have to wrap up, but there was one comment I wanted to make, which is that there are tools we use but also gaps that we see for subject specialists [see slide]… And Andrew’s comments about the catalogue struck home with me…

Q&A

Q1) Can you expand on that issue of the catalogue?

A1) Yes, I think we have to see web archives both as bulk data AND collections as collections. We have to be able to pull out the documents and reports – the traditional materials – and combine them with other material in the catalogue… So it is exciting to think about that, about the workflow… And about web archives working into the normal library work flows…

Q2) Pamela, you commented about a permissions framework as possibly vital for IRB considerations for web research… Is that from conversations with your IRB, or speculative?

A2) That came from Matt Webber’s comment yesterday on IRB becoming more concerned about web archive-based research. We have been looking for faster processes… But I am always very aware of the ethical concern… People do wonder about ethics and permissions when they see the archive… Interesting to see how we can navigate these challenges going forward…

Q3) Do you use LCSH and are there any issues?

A3) Yes, we do use LCSH for some items and the collections… Luckily someone from our metadata team worked with me. He used Dublin Core, with LCSH within that. He hasn’t indicated issues. Government documents in the US (and at state level) typically use LCSH so no, no issues that I’m aware of.

 


IIPC WAC / RESAW Conference 2017 – Day Two (Technical Strand) Liveblog

I am again at the IIPC WAC / RESAW Conference 2017 and, for today, I am in the technical strand.

Tools for web archives analysis & record extraction (chair Nicholas Taylor)

Digging documents out of the archived web – Andrew Jackson

This is the technical counterpoint to the presentation I gave yesterday… Yesterday I talked about the physical workflow of catalogue items… We found that the Digital ePrints team had started processing eprints the same way:

  • staff looked in an outlook calendar for reminders
  • looked for new updates since last check
  • download each to local folder and open
  • check catalogue to avoid re-submitting
  • upload to internal submission portal
  • add essential metadata
  • submit for ingest
  • clean up local files
  • update stats sheet
  • Then ingest, usually automated (but can require intervention)
  • Updates catalogue once complete
  • New catalogue records processed or enhanced as necessary.

It was very manual, and very inefficient… So we have created a harvester:

  • Setup: specify “watched targets” then…
  • Harvest (harvester crawls targets as usual) –> Ingested… but also…
  • Document extraction:
    • spot documents in the crawl
    • find landing page
    • extract machine-readable metadata
    • submit to W3ACT (curation tool) for review
  • Acquisition:
    • check document harvester for new publications
    • edit essential metadata
    • submit to catalogue
  • Cataloguing
    • cataloguing records processed as necessary
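The document extraction stage above can be sketched as a small pipeline: spot likely publications in the crawl, then pair each with the page that linked to it as its landing page. This is a simplified illustration, not the British Library’s actual harvester code; the class and field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class CrawledResource:
    url: str
    content_type: str
    via: str = ""  # the page that linked to this resource, if known

@dataclass
class DocumentCandidate:
    document_url: str
    landing_page: str
    metadata: dict = field(default_factory=dict)

def spot_documents(crawl):
    """Spot likely publications in a crawl – here simply anything served as PDF."""
    return [r for r in crawl if r.content_type == "application/pdf"]

def build_candidates(crawl):
    """Pair each spotted document with a landing page: the page that linked
    to it, falling back to the document URL itself."""
    return [DocumentCandidate(r.url, landing_page=r.via or r.url)
            for r in spot_documents(crawl)]
```

In the real workflow each candidate would then have machine-readable metadata extracted and be submitted to the curation tool (W3ACT) for review.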

This is better but there are challenges. Firstly, what is a “publication”? With the eprints team there was a one-to-one relationship between print and digital. But now there is no more one-to-one. For example, with gov.uk publications… An original report will have an ISBN… But the landing page is a representation of the publication, and that’s where the assets are… When stuff is catalogued, what can frustrate technical folk is that you take the date and text from the page – honouring what is there rather than normalising it… We can dishonour intent by how we capture the pages… It is challenging…

MARC is initially alarming… For a developer used to current data formats, it’s quite weird to get used to. But really it is just encoding… There is how we say we use MARC, how we do use MARC, and where we want to be now…

One of the intentions of the metadata extraction work was to provide an initial guess at the catalogue data – hoping to save cataloguers and curators time. But you probably won’t be surprised that the authors’ names etc. in the document metadata are rarely correct. We start with the worst extractor and layer up, so we have the best shot. What works best is extracting from the HTML. Gov.uk is a big and consistent publishing space so it’s worth us working on extracting that.

What works even better is the gov.uk API data – it’s in JSON, it’s easy to parse, it’s worth coding as it is a bigger publisher for us.
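That layering idea – run the weakest extractor first and let better sources override it field by field – can be sketched like this (the sample metadata values are invented):

```python
def merge_layers(*layers):
    """Merge metadata guesses worst-first: later (better) extractors
    override earlier ones, but empty values never clobber real ones."""
    merged = {}
    for layer in layers:
        merged.update({k: v for k, v in layer.items() if v})
    return merged

# Invented sample guesses, worst to best: embedded PDF metadata,
# scraped landing-page HTML, then the gov.uk content API (JSON).
pdf_guess  = {"title": "untitled.pdf", "author": "unknown"}
html_guess = {"title": "Annual Report 2016", "author": ""}
api_guess  = {"title": "Annual Report 2016", "publisher": "Cabinet Office"}

record = merge_layers(pdf_guess, html_guess, api_guess)
```

The point of the design is that a weak guess survives only where no better source has anything to say.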

But now we have to resolve references… Multiple use cases for “records about this record”:

  • publisher metadata
  • third party data sources (e.g. Wikipedia)
  • Our own annotations and catalogues
  • Revisit records

We can’t ignore the revisit records… Have to do a great big join at some point… To get best possible quality data for every single thing….
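A minimal sketch of that join: group every record we hold about a capture under one key, with revisit records joined on the digest of the payload they duplicate. The record shapes here are simplified assumptions, not the actual WARC/CDX schema:

```python
from collections import defaultdict

def join_records(records):
    """Group every record about a capture under one key. Revisit records
    (and annotations) are joined on the digest of the payload they point
    at; a plain response record keys on its own digest."""
    by_key = defaultdict(list)
    for rec in records:
        key = rec.get("refers_to_digest") or rec["digest"]
        by_key[key].append(rec)
    return dict(by_key)

# Simplified, invented record shapes – not the real WARC/CDX fields.
records = [
    {"url": "https://www.gov.uk/report", "digest": "sha1:AAA", "kind": "response"},
    {"url": "https://www.gov.uk/report", "digest": "sha1:AAA", "kind": "revisit",
     "refers_to_digest": "sha1:AAA"},
    {"url": "https://www.gov.uk/report", "digest": "sha1:AAA", "kind": "annotation",
     "refers_to_digest": "sha1:AAA"},
]
joined = join_records(records)
```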

And this is where the layers of transformation come in… Lots of opportunities to try again and build up… But… when I retry document extraction I can accidentally run up another chain each time… If we do our Solr searches correctly it should be easy, so we will be correcting this…

We do need to do more future experimentation… Multiple workflows bring synchronisation problems. We need to ensure documents are accessible when discoverable. And we need to be able to re-run automated extraction.

We want to iteratively improve automated metadata extraction:

  • improve HTML data extraction rules, e.g. Zotero translators (and I think LOCKSS are working on this).
  • Bring together different sources
  • Smarter extractors – Stanford NER, GROBID (built for sophisticated extraction from ejournals)

And we still have that tension over what a publication is… A tension between established practice and publisher output. We need to trial different approaches with catalogues and users… and close that whole loop.

Q&A

Q1) Is the PDF you extract going into another repository… You probably have a different preservation goal for those PDFs and the archive…

A1) Currently the same copy for archive and access. Format migration probably will be an issue in the future.

Q2) This is quite similar to issues we’ve faced in LOCKSS… I’ve written a paper with Herbert Van de Sompel and Michael Nelson about this thing of describing a document…

A2) That’s great. I’ve been working with the Government Digital Service and they are keen to do this consistently….

Q2) Geoffrey Bilder also working on this…

A2) And that’s the ideal… To improve the standards more broadly…

Q3) Are these all PDF files?

A3) At the moment, yes. We deliberately kept scope tight… We don’t get a lot of ePub or open formats… We’ll need to… Now publishers are moving to HTML – which is good for the archive – but that’s more complex in other ways…

Q4) What does the user see at the end of this… Is it a PDF?

A4) This work ends up in our search service, and that metadata helps them find what they are looking for…

Q4) Do they know its from the website, or don’t they care?

A4) Officially, the way the library thinks about monographs and serials, would be that the user doesn’t care… But I’d like to speak to more users… The library does a lot of downstream processing here too..

Q4) For me as an archivist, all that data on where the document is from, what issues there were in accessing it, etc. would be extremely useful…

Q5) You spoke yesterday about engaging with machine learning… Can you say more?

A5) This is where I’d like to do more user work. The library is keen on subject headings – that’s a big high-level challenge so it’s quite amenable to machine learning. We have a massive golden data set… There’s at least a master’s thesis in there, right! And if we built something, then ran it over the 3 million-ish items with little metadata, it could be incredibly useful. In my opinion this is what big organisations will need to do more and more of: making best use of human time to tailor and tune machine learning to do much of the work…

Comment) That thing of everything ending up as a PDF is on the way out, by the way… You should look at Distill.pub – a new journal from Google and Y Combinator – and that’s the future of these sorts of formats: it’s JavaScript and GitHub. Can you collect it? Yes, you can. You can visit the page, switch off the network, and it still works… And it’s there and will update…

A6) As things are more dynamic the re-collecting issue gets more and more important. That’s hard for the organisation to adjust to.

Nick Ruest & Ian Milligan: Learning to WALK (Web Archives for Longitudinal Knowledge): building a national web archiving collaborative platform

Ian: Before I start, thank you to my wider colleagues and funders as this is a collaborative project.

So, we have fantastic web archival collections in Canada… They collect political parties, activist groups, major events, etc. But, whilst these are amazing collections, they aren’t accessed or used much. I think this is mainly down to two issues: people don’t know they are there; and the access mechanisms don’t fit well with their practices. Maybe when the Archive-It API is live that will fix it all… Right now though it’s hard to find the right thing, and the Canadian archive is quite siloed. There are about 25 organisations collecting, most of which use the Archive-It service. But, if you are a researcher… to use web archives you really have to be interested and engaged, you need to be an expert.

So, building this portal is about making this easier to use… We want web archives to be used on page 150 in some random book. And that’s what the WALK project is trying to do. Our goal is to break down the silos, take down the walls between collections, between institutions. We are starting out slow… We signed Memoranda of Understanding with Toronto, Alberta, Victoria, Winnipeg, Dalhousie, and Simon Fraser University – that represents about half of the archived material in Canada.

We work on workflow… We run workshops… We have separated out the collections so that postdocs can look at them…

We are using Warcbase (warcbase.org) and command line tools: we transferred data from the Internet Archive, generate checksums, and generate scholarly derivatives – plain text, hypertext graph, etc. In the front end you enter basic information, describe the collection, and make sure that the user can engage directly themselves… And those visualisations are really useful… Looking at a visualisation of the Canadian political parties and political interest group web crawls, which tracks changes, although that may include crawler issues.

Then, with all that generated, we create landing pages, including tagging, data information, visualizations, etc.

Nick: So, on a technical level… I’ve spent the last ten years in open source digital repository communities… This community is small and tightknit, and I like how we build and share and develop on each others work. Last year we presented webarchives.ca. We’ve indexed 10 TB of warcs since then, representing 200+ M Solr docs. We have grown from one collection and we have needed additional facets: institution; collection name; collection ID, etc.
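The extra facets Nick mentions map directly onto Solr facet parameters. A minimal sketch of building such a faceted query – the field names follow the list above but are assumptions about the actual index schema:

```python
def facet_params(query, facet_fields, rows=20):
    """Build Solr request parameters faceting on the given fields.
    The field names passed in are assumptions about the index schema."""
    return {
        "q": query,
        "rows": rows,
        "facet": "true",
        "facet.field": list(facet_fields),
        "facet.mincount": 1,  # hide facet values with zero hits
    }

params = facet_params("pipeline", ["institution", "collection_name", "collection_id"])
```

These parameters would be sent to Solr’s `select` handler; a front end like Blacklight builds essentially the same request on the user’s behalf.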

Then we have also dealt with scaling issues… from a 30–40 GB index to a 1 TB index. You probably think that’s kinda cute… But we do have more scaling to do… So we are learning from others in the community about how to manage this… We have Solr running on OpenStack… Right now it isn’t at production scale, but it is getting there. We are looking at SolrCloud and potentially using a shard per collection.

Last year we had a Solr index using the Shine front end… It’s great but… it doesn’t have an active open source community… We love the UK Web Archive but… Meanwhile there is Blacklight, which is in wide use in libraries. There is a bigger community, better APIs, bug fixes, etc… So we have set up a prototype called Warclight. It does almost all that Shine does, except the tree structure and the advanced searching…

Ian spoke about derivative datasets… For each collection, via Blacklight or ScholarsPortal, we want domain/URL counts, full text, and graphs. Rather than them having to do the work, they can just engage with particular datasets or collections.
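Domain/URL counts are the simplest of those derivatives. A sketch of deriving them from a collection’s captured URLs (the URLs are invented examples):

```python
from collections import Counter
from urllib.parse import urlparse

def domain_counts(urls):
    """Count captures per domain – one of the simplest scholarly derivatives."""
    return Counter(urlparse(u).netloc for u in urls)

# Invented example URLs standing in for a collection's capture list.
counts = domain_counts([
    "http://www.liberal.ca/platform",
    "http://www.liberal.ca/news/1",
    "http://www.ndp.ca/",
])
```

Shipping a small table like this alongside a collection lets a researcher see its shape without touching raw WARCs.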

So, that goal Ian talked about: one central hub for archived data and derivatives…

Q&A

Q1) Do you plan to make the graphs interactive, by using Kibana rather than Gephi?

A1 – Ian) We tried some stuff out… One colleague tried R in the browser… That was great but didn’t look great in the browser. But it would be great if the casual user could look at drag and drop R type visualisations. We haven’t quite found the best option for interactive network diagrams in the browser…

A1 – Nick) Generally the data is so big it will bring down the browser. I’ve started looking at Kibana for stuff, so in due course we may bring that in…

Q2) Interesting as we are doing similar things at the BnF. We did use Shine, looked at Blacklight, but built our own thing…. But we are looking at what we can do… We are interested in that web archive discovery collections approaches, useful in other contexts too…

A2 – Nick) I kinda did this the ugly way… There is a more elegant way to do it but I haven’t done that yet…

Q2) We tried to give people WARC and WARC files… Our actual users didn’t want that, they want full text…

A2 – Ian) My students are quite biased… Right now if you search it will flake out… But by fall it should be available, I suspect that full text will be of most interest… Sociologists etc. think that network diagram view will be interesting but it’s hard to know what will happen when you give them that. People are quickly put off by raw data without visualisation though so we think it will be useful…

Q3) Do you think this will scale in a few years’ time?

A3) Right now that doesn’t scale… We want this more cloud-based – that’s our next three years and next wave of funded work… We do have capacity to write new scripts right now as needed, but when we scale that will be harder…

Q4) What are some of the organisational, admin and social challenges of building this?

A4 – Nick) Going out and connecting with the archives is a big part of this… Having time to do this can be challenging…. “is an institution going to devote a person to this?”

A4 – Ian) This is about making this more accessible… People are more used to Blacklight than Shine. People respond poorly to WARCs. But they can deal with PDFs and CSVs – those are familiar formats…

A4 – Nick) And when I get back I’m going to be doing some work and sharing to enable an actual community to work on this..

 


Somewhere over the Rainbow: our metadata online, past, present & future

Today I’m at the Cataloguing and Indexing Group Scotland event – their 7th Metadata & Web 2.0 event – Somewhere over the Rainbow: our metadata online, past, present & future.

Paul Cunnea, CIGS Chair is introducing the day noting that this is the 10th year of these events: we don’t have one every year but we thought we’d return to our Wizard of Oz theme.

On a practical note, Paul notes that if we have a fire alarm today we’d normally assemble outside St Giles Cathedral but as they are filming The Avengers today, we’ll be assembling elsewhere!

There is also a cupcake competition today – expect many baked goods to appear on the hashtag for the day #cigsweb2. The winner takes home a copy of Managing Metadata in Web-scale Discovery Systems / edited by Louise F Spiteri. London : Facet Publishing, 2016 (list price £55).

Engaging the crowd: old hands, modern minds. Evolving an on-line manuscript transcription project / Steve Rigden with Ines Byrne (not here today) (National Library of Scotland)

 

Ines has led the development of our crowdsourcing side. My role has been on the manuscripts side. Any transcription is about discovery. For the manuscripts team we have to prioritise digitisation so that we can deliver digital surrogates that enable access, and to open up access. Transcription hugely opens up texts but it is time consuming and that time may be better spent on other digitisation tasks.

OCR has issues but works relatively well for printed texts. Manuscripts are a different matter – handwriting, ink density, paper, all vary wildly. The REED(?) project is looking at what may be possible but until something better comes along we rely on human effort. Generally the manuscript team do not undertake manual transcription, but do so for special exhibitions or very high priority items. We also have the challenge that so much of our material is still under copyright so cannot be done remotely (but can be accessed on site). The expected user community generally can be expected to have the skill to read the manuscript – so a digital surrogate replicates that experience. That being said, new possibilities shape expectations. So we need to explore possibilities for transcription – and that’s where crowd sourcing comes in.

Crowd sourcing can resolve transcription, but issues with copyright and data protection still have to be resolved. It has taken time to select suitable candidates for transcription. In developing this transcription project we looked to other projects – like Transcribe Bentham which was highly specialised, through to projects with much broader audiences. We also looked at transcription undertaken for the John Murray Archive, aimed at non specialists.

The selection criteria we decided upon was for:

  • Hands that are not too troublesome.
  • Manuscripts that have not been re-worked excessively with scoring through, corrections and additions.
  • Documents that are structurally simple – no tables or columns for example where more complex mark-up (tagging) would be required.
  • Subject areas with broad appeal: genealogies, recipe book (in the old crafts of all kinds sense), mountaineering.

Based on our previous John Murray Archive work we also want the crowd to provide us with structured text, so that it can be easily used, by tagging the text. That’s an approach borrowed from Transcribe Bentham, but we want our community to be self-correcting rather than us doing QA of everything going through. If something is marked as finalised and completed, it will be released with the tool to a wider public – otherwise it is only available within the tool.

The approach could be summed up as keep it simple – and that requires feedback to ensure it really is simple (something we did through a survey). We did user testing on our tool, and it particularly confirmed that users just want to go in and use it, so we must make it intuitive – that’s a problem with transcription and mark-up, so there are challenges in making that usable. We have a great team who are creative and have come up with solutions for us… But meanwhile other projects have emerged. If the REED project is successful in getting machines to read manuscripts then perhaps these tools will become redundant. Right now there is nothing out there or in scope for transcribing manuscripts at scale.

So, lets take a look at Transcribe NLS

You have to log in to use the system. That’s mainly to restrict the potential for malicious or erroneous data. Once you log into the tool you can browse manuscripts, and you can also filter by the completeness of the transcription and the grade of the transcription – we ummed and ahhed about including that but we thought it was important to include.

Once you pick a text you click the button to begin transcribing – you can enter text, special characters, etc. You can indicate if text is above/below the line. You can mark up where the figure is. You can tag whether the text is not in English. You can mark up gaps. You can mark that an area is a table. And you can also insert special characters. It’s all quite straight forward.
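Markup like this only stays useful if the tags are well formed, which is one reason a self-correcting community helps. A minimal sketch of a balance check over a hypothetical tag set in the spirit of the tool described – the real Transcribe NLS tags may differ:

```python
import re

# A hypothetical inline tag set in the spirit of the "TEI Very Light"
# markup described in this talk; the real tool's tags may differ.
TAGS = {"gap", "table", "figure", "foreign"}

def tags_balanced(text):
    """Check every opening tag (e.g. <table>) has a matching close tag,
    properly nested – a simple self-correction aid for transcribers."""
    stack = []
    for m in re.finditer(r"</?([a-z]+)>", text):
        name = m.group(1)
        if name not in TAGS:
            continue  # ignore anything outside our tag set
        if m.group(0).startswith("</"):
            if not stack or stack.pop() != name:
                return False  # close without matching open
        else:
            stack.append(name)
    return not stack  # every open tag was closed
```

A check like this could run before a page is marked finalised, catching stray or unclosed tags without any human QA pass.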

Q&A

Q1) Do you pick the transcribers, or do they pick you?

A1) Anyone can take part but they have to sign up. And they can indicate a query – which comes to our team. We do want to engage with people… As the project evolves we are looking at the resources required to monitor the tool.

Q2) It’s interesting what you were saying about copyright…

A2) The issue of copyright here is about sharing off site. A lot of our manuscripts are unpublished. We use exceptions such as the 1956 Copyright Act for old works whose authors had died. The selection process has been difficult, working out what can go in there. We’ve also cheated a wee bit…

Q3) What has the uptake of this been like?

A3) The tool is not yet live. We think it will build quite quickly – people like a challenge. Transcription is quite addictive.

Q4) Are there enough people with palaeography skills?

A4) I think that most of the content is C19th, where handwriting is the main challenge. For much older materials we’d hit that concern and would need to think about how best to do that.

Q5) You are creating these documents that people are reading. What is your plan for archiving these?

A5) We do have a colleague considering and looking at digital preservation – longer-term storage being more the challenge – as part of our normal digital preservation scheme.

Q6) Are you going for a Project Gutenberg model? Or have you spoken to them?

A6) It’s all very localised right now, just seeing what happens and what uptake looks like.

Q7) How will this move back into the catalogue?

A7) Totally manual for now. It has been the source of discussion. There was discussion of pushing things through automatically once transcribed to a particular level but we are quite cautious and we want to see what the results start to look like.

Q8) What about tagging with TEI? Is this tool a subset of that?

A8) There was a John Murray Archive, including mark up and tagging. There was a handbook for that. TEI is huge but there is also TEI Light – the JMA used a subset of the latter. I would say this approach – that subset of TEI Light – is essentially TEI Very Light.

Q9) Have other places used similar approaches?

A9) Transcribe Bentham is similar in terms of tagging. The University of Iowa Civil War Archive has also had a similar transcription and tagging approach.

Q10) The metadata behind this – how significant is that work?

A10) We have basic metadata for these. We have items in our digital object database and simple metadata goes in there – we don’t replicate the catalogue record, but we ensure it is identifiable, log the date of creation, etc. And this transcription tool is intentionally very basic at the moment.

Coming up later…

Can web archiving the Olympics be an international team effort? Running the Rio Olympics and Paralympics project / Helena Byrne (British Library)

Managing metadata from the present will be explored by Helena Byrne from the British Library, as she describes the global co-ordination of metadata required for harvesting websites for the 2016 Olympics, as part of the International Internet Preservation Consortium’s Rio 2016 web archiving project.

Statistical Accounts of Scotland / Vivienne Mayo (EDINA)

Vivienne Mayo from EDINA describes how information from the past has found a new lease of life in the recently re-launched Statistical Accounts of Scotland

Lunch

Beyond bibliographic description: emotional metadata on YouTube / Diane Pennington (University of Strathclyde)

Diane Pennington of Strathclyde University will move beyond the bounds of bibliographic description as she discusses her research about emotions shared by music fans online and how they might be used as metadata for new approaches to search and retrieval

Our 5Rights: digital rights of children and young people / Dev Kornish, Dan Dickson, Bethany Wilson (5Rights Youth Commission)

Young Scot, Scottish Government and 5Rights introduce Scotland’s 5Rights Youth Commission – a diverse group of young people passionate about their digital rights. We will hear from Dan and Bethany what their ‘5Rights’ mean to them, and how children and young people can be empowered to access technology knowledgeably and fearlessly.

Playing with metadata / Gavin Willshaw and Scott Renton (University of Edinburgh)

Learn about Edinburgh University Library’s metadata games platform, a crowdsourcing initiative which has improved descriptive metadata and become a vital engagement tool both within and beyond the library. Hear how they have developed their games in collaboration with Tiltfactor, a Dartmouth College-based research group which explores game design for social change, and learn what they’re doing with crowd-sourced data. There may even be time for you to set a new high score…

Managing your Digital Footprint: Taking control of the metadata and tracks and traces that define us online / Nicola Osborne (EDINA)

Find out how personal metadata, social media posts, and online activity make up an individual’s “Digital Footprint”, why they matter, and hear some advice on how to better manage digital tracks and traces. Nicola will draw on recent University of Edinburgh research on students’ digital footprints which is also the subject of the new #DFMOOC free online course.

16:00 Close

Sticking with the game theme, we will be running a small competition on the day, involving cupcakes, book tokens and tweets – come to the event to find out more! You may be lucky enough to win a copy of Managing Metadata in Web-scale Discovery Systems / edited by Louise F Spiteri. London : Facet Publishing, 2016 – list price £55! What more could you ask for as a prize?

The ticket price includes refreshments and a light buffet lunch.

We look forward to seeing you in April!


Last chance to submit for the “Social Media in Education” Mini Track for the 4th European Conference on Social Media (ECSM) 2017

This summer I will be co-chairing, with Stefania Manca (from the Institute of Educational Technology of the National Research Council of Italy), “Social Media in Education”, a Mini Track of the European Conference on Social Media (#ECSM17) in Vilnius, Lithuania. As the call for papers has been out for a while (deadline for abstracts: 12th December 2016) I wanted to remind and encourage you to consider submitting to the conference and, particularly, to our Mini Track, which we hope will highlight exciting social media and education research.

You can download the Mini Track Call for Papers on Social Media in Education here. And, from the website, here is the summary of what we are looking for:

An expanding amount of social media content is generated every day, yet organisations are facing increasing difficulties in both collecting and analysing the content related to their operations. This mini track on Big Social Data Analytics aims to explore the models, methods and tools that help organisations in gaining actionable insight from social media content and turning that to business or other value. The mini track also welcomes papers addressing the Big Social Data Analytics challenges, such as, security, privacy and ethical issues related to social media content. The mini track is an important part of ECSM 2017 dealing with all aspects of social media and big data analytics.

Topics of the mini track include but are not limited to:

  • Reflective and conceptual studies of social media for teaching and scholarly purposes in higher education.
  • Innovative experience or research around social media and the future university.
  • Issues of social media identity and engagement in higher education, e.g. digital footprints of staff, students or organisations; professional and scholarly communications; and engagement with academia and wider audiences.
  • Social media as a facilitator of changing relationships between formal and informal learning in higher education.
  • The role of hidden media and backchannels (e.g. SnapChat and YikYak) in teaching and learning.
  • Social media and the student experience.

The conference, the 4th European Conference on Social Media (ECSM), will be taking place at the Business and Media School of Mykolas Romeris University (MRU) in Vilnius, Lithuania on 3–4 July 2017. Having seen the presentation on the city and venue at this year’s event, I feel confident it will be a lovely setting and should be a really good conference. (I also hear Vilnius has exceptional internet connectivity, which is always useful.)

I would also encourage anyone working in social media to consider applying for the Social Media in Practice Excellence Awards, which ECSM is hosting this year. The competition will be showcasing innovative social media applications in business and the public sector, and they are particularly looking for ways in which academia has been working with business around social media. You can read more – and apply to the competition (deadline for entries: 17th January 2017) – here.

This is a really interdisciplinary conference with a real range of speakers and topics, so it’s a great place to showcase interesting applications of, and research into, social media. The papers presented at the conference are published in the conference proceedings, widely indexed, and will also be considered for publication in: Online Information Review (Emerald Insight, ISSN 1468-4527); International Journal of Social Media and Interactive Learning Environments (Inderscience, ISSN 2050-3962); International Journal of Web-Based Communities (Inderscience); and Journal of Information, Communication and Ethics in Society (Emerald Insight, ISSN 1477-996X).

So, get applying to the conference and/or to the competition! If you have any questions or comments about the Social Media in Education track, do let me know.
