Jisc Digifest 2016 – Day Two Live blog

Today I am in Birmingham for day two of Jisc Digifest 2016 (catch up on Day One here). I’m particularly here wearing my Jisc’s 50 most influential higher education (HE) professionals using social media hat, helping to share the event with the wider sector who aren’t able to be at the ICC.

There is also an online programme so, if you aren’t here in person, you can not only follow the tweets on #digifest16 and the various blogs, you can also view special online content here.

As usual, this is a liveblog so all corrections, additions, comments, etc. are very welcome. 

At the moment my expected schedule for day two (this will be updated throughout the day) is:

9.00 – 10.00 Plenaries – the power of digital for teaching and learning

The chair for this session is Sarah Davies, head of change implementation support – education/student, Jisc.

Heather MacDonald, principal, Loughborough College

I have missed the beginning of Heather’s talk, so catching up as she addresses the issue of Area Reviews in FE… Heather is talking about the uncertainty of mergers, and of needing to be confident going forward, ready to embrace a technology led future.

Technology, however, is also a real and substantial threat to jobs. But this intelligence is only artificial – until recently it took huge amounts of computation to recognise an image of a cat. We need to get out there and teach to create the next generation of creative and innovative future employees. We need to address the needs of this changing world through changing pedagogies, through empowering students – perhaps to organise and teach themselves. But what would Ofsted say about that? Well, it matters, as a good Ofsted report is very important for FE colleges, but I would rather have creative and innovative teaching methods. That means we have to, as Tim Marshall said last night, bring the regulators up to speed more rapidly. We should be looking for solutions through the digital lens of technology.

Professor John Traxler, professor of mobile learning, Institute of Education, University of Wolverhampton 

Prior to today, some of what I will say has been pre-trailed on the blog. I was quoted as saying that “mobile learning” has stalled… But I essentially want to raise the issue of “mobile learning” versus just the regular matter of learning with the tools that we have. I was making that distinction around a couple of issues… One is that the money had run out, and that money and that will had fuelled the rhetoric of what we did with innovation in the first decade of this century; the second is the developments and changes in mobile technology itself. About 15 years ago mobile was delicate, fragile, expensive, scarce – something for institutions to use to promulgate their solutions. But the money ran out. And we also focused too much on what we were building, less on who we were building it for… But meanwhile mobile has made the transition to cheap, robust, easy, universal, personal. It’s hardly notable anymore. And whatever constitutes mobile learning now is not driven from the top, but by our students. And the technology moves fast, but social practices and behaviours move even faster, and that’s the harder thing to keep up with. People share, disrupt, discuss… That happens outside the institution… Or inside the institution but on an individual basis.

This technology is part of this fluid, transient, flexible, partial world. It enables people to help each other to learn. And web access is significantly moving to mobile devices rather than desktop machines. But what does that do to the roles of educational designers, teachers, etc.? What people call “phone space” is very different to cyber space. Cyber space is a permitted space, with your back to the world. Whereas phone space is multimodal: you are having conversations, doing other things, crossing roads, travelling… And this is a very different learning space from a student sat at a computer.

Now, looking back, I’d consider “mobile learning” rather backward looking, something of the last decade. I think that we, as professional educators, need to look outwards and forwards… And think about how we deal with this issue of abundance – how do we develop the criticality in our students to manage that? And we should question why they still come to us for face to face experiences, and think about what that means. Hence, I’m not that bothered if mobile learning actually is dead.

Ian Dolphin, executive director of the Apereo Foundation

We are a registered not-for-profit in the US; we have been described as an Apache Foundation for education – that’s not quite right, but it gives an idea of what we do. We provide software including Sakai, Xerte, and Opencast (capturing and managing media at significant scale). But enough about us…

Next generation digital learning environment… Lots to say there but I will be focusing on a conversation that has opened up in the United States, and the relationship of that conversation to developing the discussion around Learning Analytics.

That conversation was started by Educause, which looked at the VLE – the benefits, but also the drawbacks of being inflexible, of being very course- or teacher-centred. And that work highlighted what a new VLE might look like: flexibility for different types of courses; support for collaboration across and between institutions; support for analytics for advising; and a much more personal environment than what has gone before.

The analogy here perhaps is of Groundhog Day. These are issues we have heard before over the last 10 years. But why do I think the environment is different now? Well, we are more mature in our technology. We have gotten smarter and better at lightly working tools in and out of different environments. We are pragmatic about bringing functionality in. And, lastly, we are starting to learn and develop a practical use of big data and learning analytics as a potential tool for personalisation.

I just want to pause to talk about academic analytics – about institutional trends, problems, etc. – versus learner analytics, which are specific and personal, about interventions, retention etc. And we are already seeing some significant evidence about the effectiveness of learning analytics (see the recent From Bricks to Clicks report), with examples from the UK and US here. If one looks at the ends of the continuum, we are starting from prediction for retention intervention, but moving towards predictions for personalised learning.

There are several approaches to learning analytics at the moment. One is to buy in a system. We are taking a very different approach, developing a platform that uses various flexible components. That helps ensure data can move between systems, and that’s an issue Jisc has been raising – a national and international issue. And I think yesterday’s opening session was absolutely right about the importance of focusing on people, on humans. And if you look at the work Jisc has done, on ethical issues and informed consent, that is having an impact nationally and internationally.

We work with the Society for Learning Analytics Research (SoLAR), and there is a SoLAR learning analytics maturity framework. We have partnered with SoLAR and Jisc on our work and, to finish, I’d like to make a shameless plug for our SoLAR colleagues’ LAK ’16, which takes place in Edinburgh this summer.

Chrissi Nerantzi, principal lecturer in academic CPD, Manchester Metropolitan University

We saw a number of colleagues yesterday

 

10.30 – 11.15 #HullDtn: a collaborative approach to digital pedagogies

or:

10.30 – 11.30 New directions in open research

11.45 – 12.30 Introducing the UK research data discovery service

Christopher Brown, senior co-design manager, Jisc

13.30 – 14.30 Plenaries: the power of data

What can data mining the web tell us about our research?

Speaker: Euan Adie, CEO, Altmetric 

 

14.45 – 15.45 Responsible metrics for research

 


Jisc Digifest 2016 – Day One LiveBlog

Today and tomorrow I am in Birmingham for Jisc Digifest 2016 which I’ll be liveblogging here. I’m particularly here wearing my Jisc’s 50 most influential higher education (HE) professionals using social media hat, helping to share the event with the wider sector who aren’t able to be at the ICC.

There is also an online programme so, if you aren’t here in person, you can not only follow the tweets on #digifest16 and the various blogs, you can also view special online content here.

As usual, this is a liveblog so all corrections, additions, comments, etc. are very welcome. 

At the moment my expected schedule for day one (this may change) is:

09:30 – 10:45 Plenaries: the power of digital for change – Dr Paul Feldman, chief executive, Jisc; Professor David Maguire, chair, Jisc; Professor Andrew Harrison, professor of practice at University of Wales Trinity St David and director, Spaces That Work Ltd; Andrew’s talk is entitled creating great digital spaces for learning; Professor Donna Lanclos, associate professor for anthropological research,UNC Charlotte

11:15 – 12:00 Improving digital technology skills in FE: the CPD service – Sarah Dunne, senior co-design manager, Jisc; Clare Killen, consultant; Peter Chatterton, consultant; Georgia Hemings, co-design support officer, Jisc

12:30 – 13:15 Build your own university app in under an hour (sponsor session from Guidebook) – Justin Lamb, product specialist, Guidebook

14:15 – 15:45 Jisc’s investment in digital content for humanities: understanding the impact on research outcomes – Paola Marchionni, head of digital resources for teaching, learning and research, Jisc; Peter Findlay, digital portfolio manager, Jisc; Professor Eric T Meyer, senior research fellow, Oxford Internet Institute; Dr Kathryn Eccles, research fellow, Oxford Internet Institute.

Or:

14:15 – 15:45 Why educators can’t live without social media – Eric Stoller, higher education thought-leader, consultant, writer, and speaker

15:30 – 16:15 Working with students to make the most of digital – Sarah Knight, senior co-design manager, Jisc;  Dr Kerry Gough, senior lecturer in learning and teaching, Centre for Enhancement of Learning and Teaching (CELT), Birmingham City University; Jamie Morris, associate lecturer in student engagement, Centre for Enhancement of Learning and Teaching (CELT), Birmingham City University; Charlotte Creagh, manager of innovation and digital, Harlow College; Dave Monk, e-learning co-ordinator, Harlow College; Dani Campion, student, Birmingham City University; Charlotte Gough, student, Birmingham City University

16:45 – 17:45 The case for learning analytics – Phil Richards, chief innovation officer, Jisc; Michael Webb, director of technology and analytics, Jisc; Niall Sclater, learning analytics consultant


BL Labs Roadshow 2016

1330  Introduction
Dr Beatrice Alex, Research Fellow at the School of Informatics, University of Edinburgh

1335 Doing digital research at the British Library
Nora McGregor, Digital Curator at the British Library

The Digital Research Team is a cross-disciplinary mix of curators, researchers, librarians and programmers supporting the creation and innovative use of the British Library’s digital collections. In this talk Nora will highlight how we work with those operating at the intersection of academic research, cultural heritage and technology to support new ways of exploring and accessing our collections through: getting content in digital form and online; collaborative projects; and offering digital research support and guidance.

1405  British Library Labs
Mahendra Mahey, Project Manager of British Library Labs.

The British Library Labs project supports and inspires scholars to use the British Library’s incredible digital collections in exciting and innovative ways for their research, through various activities such as competitions, awards, events and projects.

Labs will highlight some of the work that they and others are doing around digital content in libraries, and also talk about ways to encourage researchers to engage with the British Library. They will present information on the annual BL Labs Competition, which closes this year on 11th April 2016. Through the Competition, Labs encourages researchers to submit an important research question or creative idea that uses the British Library’s digital content and data. Two Competition winners then work in residence at the British Library for five months, before showcasing the results of their work at the annual Labs Symposium in November 2016.

Labs will also discuss the annual BL Labs Awards, which recognise outstanding work already completed that has used the British Library’s digital collections and data. This year, the Awards will commend work in four key areas: Research, Artistic, Commercial and Teaching/Learning. The deadline for entering the BL Labs Awards this year is 5th September 2016.

1420  Overview of projects that have used the British Library’s digital content and data.
Ben O’Steen, Technical Lead of British Library Labs.

Labs will further present information on various projects such as the ‘Mechanical Curator’ and other interesting experiments using the British Library’s digital content and data.

1500 Coffee and networking

1530 BL Labs Awards: Research runner-up project: “Palimpsest: Telling Edinburgh’s Stories with Maps”
Professor James Loxley, Palimpsest, University of Edinburgh

Palimpsest seeks to find new ways to present and explore Edinburgh’s literary cityscape, through interfaces showcasing extracts from a wide range of celebrated and lesser known narrative texts set in the city. In this talk, James will set out some of the project’s challenges, and some of the possibilities for the use of cultural data that it has helped to unearth.

1600 Geoparsing Historical Texts data
Dr Claire Grover, Senior Research Fellow, School of Informatics, University of Edinburgh

Claire will talk about work the Edinburgh Language Technology Group have been doing for Jisc on geoparsing historical texts such as the British Library’s Nineteenth Century Books and Early English Books Online Text Creation Partnership which is creating standardized, accurate XML/SGML encoded electronic text editions of early print books.

1630 Finish

Feedback for the event
Please complete the following feedback form.


eLearning@ed/LTW Monthly Showcase #2: Open

Today we have our second eLearning@ed/LTW Showcase and Network event. I’m liveblogging so, as usual, corrections and updates are welcome. 
Jo Spiller is welcoming us along and introducing our first speaker…
Dr. Chris Harlow – “Using WordPress and Wikipedia in Undergraduate Medical & Honours Teaching: Creating outward facing OERs”
I’m just going to briefly tell you about some novel ways of teaching medical students and undergraduate biomedical students using WordPress and platforms like Wikipedia. So I will be talking about our use of WordPress websites in the MBChB curriculum. Then I’ll tell you about how we’ve used the same model in Reproductive Biology Honours. And then how we are using Wikipedia in Reproductive Biology courses.
We use WordPress websites in the MBChB curriculum during Year 2 student selected components. Students work in groups of 6 to 9 with a facilitator. They work with a provided WordPress template – the idea being that the focus is on the content rather than the look and feel. In the first semester the topics are chosen by the group’s facilitator. In semester two the topics and facilitators are selected by the students.
So, looking at example websites you can see that the students have created rich websites, with content, appendices. It’s all produced online, marked online and assessed online. And once that has happened the sites are made available on the web as open educational resources that anyone can explore and use here: http://studentblogs.med.ed.ac.uk/
The students don’t have any problem at all building these websites and they create these wonderful resources that others can use.
In terms of assessing these resources there is a 50% group mark on the website by an independent marker, a 25% group mark on the website from a facilitator, and (at the students’ request) a 25% individual mark on student performance and contribution which is also given by the facilitator.
In terms of how we have used this model with Reproductive Biology Honours, it is a similar idea. We have 4-6 students per group. This work counts for 30% of their Semester 1 course “Reproductive Systems” marks, and assessment is along the same lines as the MBChB. Again, we can view examples here (e.g. “The Quest for Artificial Gametes”). Worth noting that there is a maximum word count of 6000 words (excluding appendices).
So, now onto the Wikipedia idea. This was something which Mark Wetton encouraged me to do. Students are often told not to use or rely on Wikipedia but, speaking as a biomedical scientist, I use it all the time. You have to use it judiciously but it can be an invaluable tool for engaging with unfamiliar terminology or concepts.
The context for the Wikipedia work is that we have 29 Reproductive Biology Honours students (50% Biomedical Sciences, 50% intercalating medics), and they are split into groups of 4-5 students. We did this in Semester 1, week 1, as part of the core “Research Skills in Reproductive Biology”. And we benefited from expert staff including two Wikipedians in Residence (at different Scottish organisations), a librarian, and a learning, teaching and web colleague.
So the students had an introduction to Wikipedia, then some literature searching examples. We went on to groupwork sessions to find papers on particular topics, looking for differences in definitions, spellings, terminology. We discussed findings. This led on to groupwork where each group defined their own aspect to research. And from there they looked to create Wikipedia edits/pages.
The groups really valued trying out different library resources and search engines, and seeing the varying content that was returned by them.
The students then, in the following week, developed their Wikipedia editing skills so that they could combine their work into a new page for Neuroangiogenesis. Getting that online in an afternoon was incredibly exciting. And actually that page was high in the search rankings immediately. Looking at the traffic statistics, that page seemed to be getting 3 hits per day – a lot more reads than the papers I’ve published!
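Traffic figures like these can also be pulled programmatically. As a hedged sketch (the dates are illustrative, not the ones actually used in class), the Wikimedia REST pageviews API exposes daily per-article view counts at a URL of this shape:

```python
# Sketch: building a Wikimedia REST pageviews API URL for daily
# per-article view counts. The article and date range are illustrative.
from urllib.parse import quote

def pageviews_url(article, start, end,
                  project="en.wikipedia", access="all-access", agent="all-agents"):
    """Build a URL for daily pageview counts; `start`/`end` are YYYYMMDD.

    The API returns JSON with one item per day, each carrying a `views` count.
    """
    base = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"
    return f"{base}/{project}/{access}/{agent}/{quote(article, safe='')}/daily/{start}/{end}"

url = pageviews_url("Neuroangiogenesis", "20160101", "20160131")
```

Fetching that URL with any HTTP client returns the daily counts, which can then be summed or averaged.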
So, we will run the exercise again with our new students. I’ve already identified some terms which are not already out there on Wikipedia. This time we’ll be looking to add to or improve High-Grade Serous Carcinoma and Fetal Programming. But we have further terms that need more work.
Q&A
Q1) Did anyone edit the page after the students were finished?
A1) A number of small corrections, and one querying of whether a PhD thesis was a suitable reference – whether a primary or secondary reference. What needs to be done more than anything else is building more links to that page from other pages.
Q2) With the WordPress blogs you presumably want some QA, as these are becoming OERs. What would happen if a project got, say, a low C?
A2) Happily that hasn’t happened yet. That would be down to the tutor I think… But I think people would be quite forgiving of undergraduate work, which is how it is clearly presented.
Q3) Did you consider peer marking?
A3) An interesting question. Students are concerned that there are peers in their groups who do not contribute equally, or let peers carry them.
Comment) There is a tool called PeerAim where peer input weights the marks of students.
Q4) Do all of those blog projects have the same model? I’m sure I saw something on peer marking?
A4) There is peer feedback but not peer marking at present.
Dr. Anouk Lang – “Structuring Data in the Humanities Classroom: Mapping literary texts using open geodata”
I am a digital humanities scholar in the school of Languages and Linguistics. One of the courses I teach is digital humanities for literature, which is a lovely class and I’m going to talk about projects in that course.
The first MSc project the students looked at was to explore Robert Louis Stevenson’s The Dynamiter. Although we were mapping the texts, the key aim was to understand who wrote which part of the text.
So the reason we use mapping in this course is because these are brilliant analytical students but they are not used to working with structured data, and this is an opportunity to do that. So, using CartoDB – a brilliant tool that will draw data from Google Sheets – the students needed to identify locations in the text; I also asked them to give each mention an “emotion rating”. That is a rating of intensity of emotion based on the work of Ian Gregory, a spatial historian who has worked with Lake District data on the emotional intensity of texts.
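To make the “structured data” point concrete, here is a minimal sketch (with invented field names, coordinates and ratings, not the class’s actual data) of the kind of spreadsheet rows the students hand-built, written out as CSV ready for upload to a tool like CartoDB:

```python
# Sketch: one row per place mention, with an "emotion rating" column,
# serialised as CSV. All field names and values here are illustrative.
import csv
import io

rows = [
    # place mention, latitude, longitude, emotion intensity (1 = low, 5 = high)
    {"place": "London", "lat": 51.5074, "lon": -0.1278, "emotion": 4},
    {"place": "Utah",   "lat": 39.3200, "lon": -111.0937, "emotion": 2},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["place", "lat", "lon", "emotion"])
writer.writeheader()
writer.writerows(rows)
csv_text = buf.getvalue()  # paste into a sheet or upload directly
```

Hand-building rows like these is exactly where students confront the structuring decisions the talk describes.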
So, the students built this database by hand. And then, loaded into CartoDB, you get all sorts of nice ways to visualise the data. So, looking at a map of London you can see where the story occurs. The Dynamiter is a very weird text with a central story in London but side stories about the planting of bombs, which is kind of played as comedy. The view I’m showing here is a heatmap. So for this text you can see the scope of the text. Robert Louis Stevenson was British, but his wife was American, and you see that this book brings in American references, including unexpected places like Utah.
So, within CartoDB you can try different ways to display your data. You can view a “Torque Map” that shows chronology of mentions – for this text, which is a short story, that isn’t the most helpful perhaps.
Now we do get issues of anachronisms. OpenStreetMap – on which CartoDB is based – is a contemporary map and the geography and locations on the map changes over time. And so another open data source was hugely useful in this project. Over at the National Library of Scotland there is a wonderful maps librarian called Chris Fleet who has made huge numbers of historical maps available not only as scanned images but as map tiles through a Historical Open Maps API, so you can zoom into detailed historical maps. That means that mapping a text from, say, the late 19th Century, it’s incredibly useful to view a contemporaneous map with the text.
You can view the Robert Louis Stevenson map here: http://edin.ac/20ooW0s.
So, moving to this year’s project… We have been looking at Jean Rhys. Rhys was a white Creole born in Dominica who lived mainly in Europe. She is a strongly located author, with place important to her work. For this project, rather than hand coding texts, I used the wonderful Edinburgh Geoparser (https://www.ltg.ed.ac.uk/software/geoparser/) – a tool I recommend, and a new version is imminent from Claire Grover and colleagues in LTG, Informatics.
So, the Geoparser goes through the text and picks out text that looks like places, then tells you which it thinks is the most likely location for that place – based on aspects like nearby words in the text etc. That produces XML, and Claire has created me an XSLT stylesheet, so all the students have had to do is manually clean up that data. The Geoparser gives you a GeoNames reference that enables you to check latitude and longitude. Now this sort of data cleaning, the concept of gazetteers, these are bread and butter tools of the digital humanities. These are tools which are very unfamiliar to many of us working in the humanities. This is open, shared, and the opposite of the scholar working secretly in the library.
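As a rough illustration of that pipeline, here is a minimal sketch of pulling place names and coordinates out of geoparser-style XML with the standard library. The element and attribute names below are assumptions for illustration only; the Edinburgh Geoparser’s actual output format should be checked against its documentation.

```python
# Sketch: extracting (name, lat, long) tuples from geoparser-style XML.
# Element/attribute names are invented for illustration.
import xml.etree.ElementTree as ET

xml_text = """
<document>
  <placename name="Halifax" gazref="geonames:2647632" lat="53.7178" long="-1.8556"/>
  <placename name="Edinburgh" gazref="geonames:2650225" lat="55.9521" long="-3.1965"/>
</document>
"""

places = [
    (p.get("name"), float(p.get("lat")), float(p.get("long")))
    for p in ET.fromstring(xml_text).iter("placename")
]
# Each tuple can then be checked by hand (or plotted) to catch
# wrongly resolved place names before the data goes on a map.
```

This manual checking step is the “clean up” work the students do after the XSLT transform.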
We do websites in class to benefit from that publicness – and the meaning of public scholarship. When students are doing work in public they really rise to the challenge. They know it will connect to their real world identities. I insist students show their name, their information, their image because this is part of their digital scholarly identities. I want people who Google them to find this lovely site with its scholarship.
So, for our Jean Rhys work I will show you a mock up preview of our data. One of the great things about visualising your data in these ways is that you can spot errors in your data. So, for instance, checking a point in Canada we see that the Geoparser has picked Halifax, Nova Scotia when the text indicates Halifax in England. When I raised this issue in class today the student got a wee bit embarrassed and made immediate changes… which again is a kind of perk of working in public.
Next week my students will be trying out QGIS with Tom Armitage of EDINA; that’s a full-on GIS system, so that will be really exciting.
For me there are real pedagogical benefits of these tools. Students have to really think hard about structuring their data, which is really important. As humanists we have to put the data in our work into computational form. Taking this kind of class means they are more questioning of data, of what it means, of what accuracy is. They are critically engaged with data and they are prepared to collaborate in a gentle kind of way. They also get to think about place in a literary sense, in a way they haven’t before.
We like to think that we have it all figured out in terms of understanding place in literature. But when you put a text into a spreadsheet you really have to understand what is being said about place in a whole different way than a close reading. So, if you take a sentence like “He found them a hotel in Rue Lamartine, near Gare du Nord, in Montmartre”, is that one location or three? The Edinburgh Geoparser maps two points but not Rue Lamartine… So you have to use Google Maps for that… And is the accuracy correct? And you have to discuss whether those two map points are distorting. The discussion there is richer than any other discussion you would have around close reading. We are so confident about close readings… We assume it as a research method… This is a different way to close read… To shoehorn the text into a different structure.
So, I really like Michel de Certeau’s “Spatial stories” in The Practice of Everyday Life (de Certeau 1984), where he talks about structured space and the ambiguous realities of use and engagement in that space. And that’s what that Rue Lamartine type example is all about.
Q&A
Q1) What about looking at distance between points, how length of discussion varies in comparison to real distance
A1) That’s an interesting thing. And that CartoDB Torque display is crude but exciting to me – a great way to explore that sort of question.
OER as Assessment – Stuart Nichol, LTW
I’m going to be talking about OER as assessment from a student’s perspective. I study part time on the MSc in Digital Education and a few years ago I took a module called Digital Futures for Learning, a course co-created by participants and where assessment is built around developing an Open Educational Resource. The purpose is to “facilitate learning for the whole group”. This requires a pedagogical approach (to running the module) which is quite structured to enable that flexibility.
So, for this course, the assessment structure is 30% position paper (the basis of content for the OER), then 40% of the mark for the OER itself (30% peer-assessed and tutor moderated / 10% self-assessed), and then the final 30% of the marks come from an analysis paper that reflects on the peer assessment. You could then resubmit the OER along with that reflective paper.
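For concreteness, the weighting described above can be sketched as a small calculation (the component marks below are invented, purely for illustration):

```python
# Sketch of the Digital Futures for Learning weighting described above:
# 30% position paper, 40% OER (30% peer-assessed + 10% self-assessed),
# 30% analysis paper. Component marks are out of 100 and illustrative.
def final_mark(position_paper, oer_peer, oer_self, analysis_paper):
    return (0.30 * position_paper
            + 0.30 * oer_peer
            + 0.10 * oer_self
            + 0.30 * analysis_paper)

mark = final_mark(position_paper=65, oer_peer=70, oer_self=60, analysis_paper=68)
```

The four weights sum to 1.0, so a uniform set of component marks passes straight through unchanged.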
I took this module a few years ago, before the University’s adoption of an open educational resource policy, but I was really interested in this. So I ended up building a course on Open Accreditation and Open Badges, using Weebly: http://openaccreditation.weebly.com/.
This was really useful as a route to learn about Open Educational Resources generally but that artefact has also become part of my professional portfolio now. It’s a really different type of assignment and experience. And, looking at my stats from this site I can see it is still in use, still getting hits. And Hamish (Macleod) points to that course in his Game Based Learning module now. My contact information is on that site and I get tweets and feedback about the resource which is great. It is such a different experience to the traditional essay type idea. And, as a learning technologist, this was quite an authentic experience. The course structure and process felt like professional practice.
This type of process, and use of open assessment, is in use elsewhere. In Geosciences there are undergraduate students working with local schools and preparing open educational resources around that. There are other courses too. We support that with advice on copyright and licensing. There are also real opportunities for this in the SLICCs (Student Led Individually Created Courses). If you are considering going down this route then there is support at the University from the IS OER Service – we have a workshop at KB on 3rd March. We also have the new Open.Ed website, about Open Educational Resources which has information on workshops, guidance, and showcases of University work as well as blogs from practitioners. And we now have an approved OER policy for learning and teaching.
The new OER policy is clear about how it relates to assessment: OERs are created by both staff and students.
And finally, fresh from the ILW Editathon this week, Ewan MacAndrew, our new Wikimedian in residence, will introduce us to Histropedia (Interactive timelines for Wikipedia: http://histropedia.com) and run through a practical introduction to Wikipedia editing.


EdinburghApps Event LiveBlog

This afternoon I’ve popped in to see the presentations from this weekend’s EdinburghApps event, being held at the University of Edinburgh Informatics Forum. As usual for my liveblogs, all comments and edits are very much welcomed. 

EdinburghApps, which also ran in 2014, is a programme of events organised by Edinburgh City Council (with various partners) generating ideas and technology projects to address key social challenges. This year’s events are themed around health and social care (which have recently been brought together in Scotland under the Public Bodies (Joint Working) (Scotland) Act for health and social care integration).

Unfortunately I wasn’t able to be part of the full weekend but this presentation session will involve participants presenting the projects they have been coming up with, addressing health and social care challenges around five themes (click to see a poster outlining the challenge):

And so, over to the various teams (whose names I don’t have but who I’m quite sure the EdinburghApps team will be highlighting on their blog in the coming weeks!)…

Meet Up and Eat Up

This is Ella, an international student at UoE. She meets people at events but wants to grow her network. She sees a poster for a “Meet Up and Eat Up” event, advertising food and drink events for students to get together. She creates a profile, including allergies/preferences. She chooses whether to attend or host a meal. She picks a meal to attend, selects a course to bring, and shares what she will bring. She hits select and books a place at the meal…

So on the night of the meal everyone brings a course… (cue some adorable demonstration). And there is discussion, sharing of recipes (facilitated by the app), sharing of images, hashtags etc… Ratings within the app (also adorably demonstrated).

So, Ella shares her meal, she shares the recipe in the app…

The Meet Up and Eat Up team demonstrate their app idea.


Q&A

Q) Just marketed to students or other lonely people?

A) Mainly at students, and international students in particular, as we think they are particularly looking for those connections, especially around holidays. But we’d want more mixing there; we might put it into freshers’ week packs, introductory stuff… We might need to also arrange some initial meals to make this less intimidating… maybe even a freshers’ week(s) event – there are five universities in town, so there’s an opportunity to have mixing across those groups of students.

Game of Walks

Our challenge was to encourage walking to school, so our audience was children and parents, but also schools. We have turned our challenge into Game of Walks…

So, we’d find some maps of good walks to schools, routes that are longer but also safe… And along the route there would be sensors and, as you walk past, an image – appropriate to a theme in the curriculum – would appear on the pavement… So the kids will be in teams and look for an image appropriate for their team (e.g. sharks vs jellyfish).

Now, when we tested this out we discovered that kids cheat! And may try to rescan/gather the same thing. So it will randomly change to avoid that. And each week the theme will change…

So, there is also a tech angle here… We would have a wide field sensor – to trigger the device – and a narrow field sensor would enable the capturing of the thing on the walk… So that’s Arduino operated. And you’d have 3D printed templates for the shape you need – which kids could print at school – so you’d just need a wee garden ornament type thing to trigger it. And once a week the kids would gather that data and see who won…
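The two-sensor flow described above could be sketched roughly as follows (Python standing in for the Arduino control logic; the class, method names and behaviour are my own hypothetical illustration, not the team’s actual code):

```python
import random

class WalkSensor:
    """Rough sketch of the Game of Walks device logic described above.
    All names and behaviour are hypothetical, not the team's code."""

    def __init__(self, theme_images):
        self.theme_images = theme_images          # e.g. ["shark", "jellyfish"]
        self.current_image = random.choice(theme_images)
        self.footfall = 0                         # data the kids gather weekly

    def on_wide_field_trigger(self):
        """A walker approaches: wake the device and show the current image."""
        self.footfall += 1
        return self.current_image

    def on_narrow_field_capture(self, team):
        """A walker scans the image: score a point if it matches their team,
        then randomise the image so rescanning doesn't work (the anti-cheat rule)."""
        point = 1 if self.current_image == team else 0
        self.current_image = random.choice(self.theme_images)
        return point
```

The weekly theme change would just swap out `theme_images`, and `footfall` is the incidental data the kids could collect and compare.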

 

The Game of Walks team demo their idea for gamified school walks.

Q&A

Q1) How expensive will these be?

A1) Tried to pick sensors and devices that are cheap and cheerful. Arduino Nanos are very inexpensive. LEDs probably more expensive… But keep it cheap, so if vandalised or stolen you can either repair or deal with loss.

Q2) How would you select the locations for the sensors… ?

A2) We thought we’d get parents and schools to select those… Encourage longer routes… The device will have that badge until collected… If lots of kids are in the same place there’ll be a constant procession, which could be tricky… We’d want it in a zone around the school where smaller groups would trigger it.

Q3) Who programmes the Arduino?

A3) Lots of schools teach Arduino, so could get the kids involved in this too, also the shapes, the data collection and users. And you will have footfall data as part of that capture which would also be interesting… Maybe get kids involved in potentially moving the sensors to new places because of lots/not enough footfall…

Comment) I think that’s exciting, getting the kids involved in that way…

Team Big Data

Note: this is almost certainly not their name, but they didn’t share their team name in their presentation.

So, I’m a user for our system… My mum has just recovered from cancer and I’m quite concerned about my own risk… So my friend suggested a new app to find out more… So I enter my data… And, based on a bigger data set, my risks are calculated. And as a user I’m presented with an option for more information and tips on how to change… The database/system offers suggestions of how to improve my habits… And maybe you reject some suggestions, so receive alternative ideas… And the app reminds you… in case you forget to cut back on your sausages… And based on those triggers and reminders you might update your personal data and risk… And the user is asked for feedback – and hopefully improves what they do…
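The flow the team walked through – a risk estimate from personal data plus a bigger data set, then suggestions the user can reject in favour of alternatives – might look something like this toy sketch. The team did not present a real model; the field names and weightings here are invented for illustration only:

```python
def risk_score(profile, population):
    """Toy risk estimate comparing a user's data with population figures.
    Purely illustrative -- not a real health model."""
    score = population.get("baseline", 0.1)
    if profile.get("family_history"):
        score += 0.2
    if profile.get("sausages_per_week", 0) > population.get("avg_sausages", 3):
        score += 0.1
    return min(score, 1.0)

def next_suggestion(suggestions, rejected):
    """If the user rejects a tip, offer the next alternative instead
    (the reject-and-replace loop described above)."""
    for tip in suggestions:
        if tip not in rejected:
            return tip
    return None
```

Updating the profile after a reminder would simply re-run `risk_score`, closing the feedback loop the team described.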

Team Big Data demo their idea for an app nudging good health and personal care through an app and big data risk/suggestion database.

Q&A

Q1) What stuff is going to be worked on… What would be held?

A1) We did a demonstration with a computer sharing all of your data in one place… It’s currently in lots of different places… We did a few simple designs that hold all the data of the users… We’re not trying to be Big Brother… We presented the user experience, but not so much the behind-the-scenes stuff…

Q2) How does the app know about the beer count? (part of the demo)

A2) We demonstrated this as an app but it could be a website, or something else… You can perhaps get that data based on purchase history etc. The user doesn’t have to do anything extra here, it’s using existing data in different places. Also people often share this stuff on Facebook.

Comment) You have tackled a really difficult problem… You’ve made a good start on this… It’s such a massive behavioural change to do…

Comment) Many people are happy to volunteer data already…

Q3) How do you convince Tesco to share data with this app?

A3) I think you’d need to have an agreement between NHS and Tesco… For a new form of membership where you opt into that sharing of data.

Comment) Might be a way to encourage people to sign up for a ClubCard, if there was a benefit for accuracy and advice in the app.

A3) Maybe also there are discounts that…

Comment) Maybe bank cards are a better way to do that. So there may be a way to join up with those organisations to link up some of these…

A3) This idea isn’t any kind of competition… Might give you ideas about data access…

Comment) I was just wanting to raise the issue that if you were working with, e.g. Tesco, you’d need to also get data from other large and small companies, and working with one company may put others off working with you – incentivising users to, e.g., get a ClubCard isn’t going to incentivise, say, Sainsbury’s to work with you with the data they hold. There are also data protection issues here that are too complex/big to get into.

Simply SMS

Note: this is a charming father/son team including our youngest participant, a boy named Archie who seems to be around 9 or 10 years old (and is clearly a bit of a star).

So this is an app to help people with cognitive impairments to engage and communicate with the younger generation. Maybe a teen, Billy Boy, wants to help out his Grandad, who has had a stroke… So Grandad has an app, and Billy Boy has a reciprocal app. They have slightly different versions… And they can exchange pictograms… Billy Boy can prompt Grandad to brush his teeth, or do other things to keep in touch and check in… Grandad can ask Billy Boy how he’s doing…

The Simply SMS team demo their idea for an app connecting lonely people across generations through pictogram messages.

Q&A

Q1) How do you get this working over SMS?

A1) It would actually be a messaging system, which could use words as well as pictures… Perhaps as time goes on you could change it so different people with different cognitive impairments could use it – e.g. a number of stars so you could indicate how well you were eating. Also there would be some messaging between, say, carer, home help, relatives etc. so that all of those engaged in care can share updates, e.g. that Grandad has been taken to hospital…
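A minimal sketch of that idea – pictograms, optional star ratings, and fanning updates out to the care circle – might look like the following. The pictogram vocabulary and all names here are invented for illustration; the team showed pictures, not code:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical pictogram vocabulary -- the team showed pictures, not codes
PICTOGRAMS = {"brush_teeth": "Brush your teeth", "how_are_you": "How are you?"}

@dataclass
class PictoMessage:
    sender: str
    recipient: str
    pictogram: str               # key into PICTOGRAMS
    stars: Optional[int] = None  # e.g. "how well did you eat?" on a 1-5 scale

    def render(self):
        """Turn the pictogram (plus any star rating) into a displayable message."""
        text = PICTOGRAMS[self.pictogram]
        if self.stars is not None:
            text += " " + "*" * self.stars
        return text

def notify_care_circle(message, circle):
    """Share an update with everyone involved in the care package:
    carer, home help, relatives."""
    return [(member, message.render()) for member in circle]
```

The point of the sketch is that the pictogram is just a shared key: each app (Grandad’s and Billy Boy’s) can render it however suits its user.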

Q2) What do you want to do next?

A2) We were looking at Meteor that lets you chain server, iPhone and Android apps together and they have a really nice chat room style system, for public or private chat rooms. So we would look to create plugins for that for pictograms and the right sort of mix of public and private messages. And bring together people involved based on the care package that person has.

Q3) Can this be done so that Billy Boy’s existing messaging apps could tie into it?

A3) It may be that there are ways to do that. Often there are tools to integrate things together… tools to post to multiple sites at once, so we could maybe use those…

Q4) Could you compare our big data approach to yours?

A4) This isn’t really big data. The intelligence isn’t really in the application, it’s in the people who are involved in the care and using the apps who have the intelligence.

Q5) Do you think people would be able to learn these sorts of pictograms?

A5) We’d have to see… But there are some simple things you can do – like the stars. But people retiring now include those used to working with technology… So pensioners are getting more adept at these things. People will adopt new technology.

Q6) Have you heard of a thing called Talking Mats? It’s a communication tool for people with dementia using pictures. Would be good to look into that, and how that could fit together.

A6) There are lots of things out there… doing parts of this. And part of this idea is about getting teenagers involved too.

Q7) How about animated gifs?

A7) Lots of the development would be about what people actually need to know… I have a friend who calls to check her ageing relative has had a shave, or what they did today.

Comment) One nice next step might be to test out that pictogram language, see if they find that works, including teenagers and older people…

A) Debating what a bank or a school or shop might look like, for instance…

Closing Comments – Keira (We Are Snook) and Sally Kerr (Edinburgh City Council)

Keira: We have so many new ideas, and we started yesterday with our challenges but nothing else. Obviously a two day hack has its limitations… It’s not the way to get things perfect. But we have the opportunity now to come together again in a few weeks’ time (27th February).

Sally: So our next event is here (University of Edinburgh Informatics Forum) as well, on Saturday 27th February. Then after that midway event there will be a pitch session on Sunday 13th March. We’ll contact you all, share information on the blog, get challenge owners on the blog… and get you to the next stage.

Keira (We Are Snook): So I’m going to hand out a wee plan for the next few weeks so that you can get your ideas ready, the milestones for your journey, who the key actors are, who will do what. You should have left team outlines with me, and forms that will help us share your ideas with others too. And we’d welcome your feedback on the event as well. And finally I have one of our Snook plywood phones for Archie (our very youngest participant at around 10) for prototyping lots of app ideas!

And with that, the day was done – although conversations continued over coffee and KitKats. A really interesting set of ideas though, and I’m told there is another team who will be along at the next sessions but weren’t able to make the show and tell today. I would recommend keeping an eye on the EdinburghApps website or @EdinburghApps on Twitter for more updates. I’ll certainly be eager to find out if we (my colleagues at EDINA and I) can offer any technical help as some of these ideas progress further. 


LTW + eLearning@ed Monthly Meet Up #1 – Jan 2016 Liveblog

These notes were taken live at the first Learning Teaching and Web Services and eLearning@ed joint Monthly Meet Up, which took place at Appleton Tower on 28th January 2016. The definitive version can be found on the elearning@ed wiki, where you’ll also find related resources. As these were live notes the normal caveats apply and comments, corrections, etc. are very much welcomed.

Jo Spiller – Introductions

Welcome to our first Monthly Showcase and Networking session, which will be around five key areas here.

A few things coming up that may be of interest. We have the soft launch of MediaHopper as of 21st Jan. We also have the launch of Open.Ed, showcasing OER best practice, on 4th February. And we also have OER Workshops on 3rd March in the Central area and 4th May at Kings Buildings.

Innovative Learning Week runs 15th-19th February with loads of events including a Wikipedia Editathon, Photogrammetry on 16th Feb, and Plotting the Campus on 17th Feb. We also have Learning Technology Fairs – School of Geosciences (15th Feb); ECA on 22nd March.

Marketing ODL

Dissertations at a Distance & eLearning@ed

Prof Jonathan Rees – Using video in the clinical medical curriculum. What are we learning?

I’m going to talk to you about what the challenges are in the medical school. In clinical medicine we work on a “Carousel” model. There are 18 carousels, each lasting 2 weeks, over 40 weeks each year. 15 students per carousel. 14 hours of tutorial each week, and 30 hours of clinical observation. Each student engages with around 8-10 staff. You have 3 hours of lectures, spaced up to 3 months away from the carousel. So, that’s not a system you’d necessarily design, so there are problems to solve…

And we’ve made a video here to show you how we are addressing some of those challenges. This video addresses key concepts and introductions to material they will see in the course. So, essentially we’ve been trying to use videos to overcome some of these challenges. Many of our students don’t know who some of our staff are here – which means that a challenge for our modules is to put a face to the name, to make this course personal, to make those connections to the people in charge of their teaching.

People did use video when I was a student… But they work very well for procedures. We want to put some things online partly because students are based throughout the region, and that means it’s available close to when they need it. In some ways our course structure is not linear: some of our material in year 4 is the just-in-time learning for year 5. One of the interesting things about videos is you get to see what other people are doing and thinking!

Q1: How do students respond to them?

A1: They look at them, and we get told if they don’t work. They say that they like them and request them.

Q2: Now that staff are more recognisable does that change anything?

A2: We only started doing this in September properly, but too early to say.

Q3: You did something interesting on quality of iPhone recording and mic.

A3: One of the talking head ideas was to get students to know who the module leaders are, to make those connections… If you have to cross town to do things it can be a nightmare… The phone is good enough to create short content, timely content when needed. Even cheap mics in a good room are amazing.

Q4: Do you have a limit on videos to keep them short or is it any length?

A4: Some are 2 and a half minutes, which works great. We try to keep them under 5 mins or around 5 mins.

Q5: Are they scripted?

A5: No. The talking head ones we are still learning how to do that… No scripting but sometimes two or three takes to get the right version.

Q6: Editing can take the time, how have you managed this?

A6: In theory there will be a system in the college. Right now we can edit, it’s not great. But generally we try to do everything in one take… With maybe a stop and restart. But we try to avoid too much editing.

Comment: I do a few online sound clips with a PowerPoint… I find I have to do it twice… Run once with timer, then second go I capture it.

A6: I’m still learning… The more we do it, the better we’ll get at it… We’ll get used to doing it.

Imogen Scott – Creating high-quality media for teaching (advice from MOOCland)

I’m talking here about video for a much wider audience. You wouldn’t always invest this much time and work in a video for a small group etc. I work in the Media Production Team with my colleagues Lucy, Tim, Nichol, Kara and Andy. We create media for MOOCs and I’m going to draw on a couple of examples here, particularly from our Andy Warhol MOOC…

Imogen is playing a video from our Warhol MOOC.

So in that clip we had some locations – an art studio (not Warhol’s!) – and our lecturer, Glyn, also found some Warhol images that we could use online. Now that is a very tricky thing to do… It was only possible because of Glyn’s involvement in a large scale research collaboration, and that brought its own challenges.

The Warhol course was 5 weeks long with a lot of video content each week. We had multiple stakeholders: Tate, Artist Rooms, Arts Council, National Galleries of Scotland. And they needed to negotiate rights etc.

By contrast we also made the Nudgeit: Understanding Obesity course – a 5 week course, 3 hours per week learner effort, 35 mins per week video content. This was all content created by the team. We used teaching spaces and the anatomy museum, and the team created their own resources for the course – interpretations of data, visuals, etc. – and documented that process for the course.

We also did Mental Health: A Global Priority. This was done mainly with audio materials as this was designed to be used in the developing world and audio means much smaller downloads. And it also enabled anonymity for some participants, particularly important given some of the interviewees discussing mental health. (We are now hearing audio from the course.)

This course was quicker to source – no locations needed, minimal visual content. But it took a long time as the challenge was both the location and time zones of participants and partners, as well as the less reliable internet connections in some locations. We had plenty of time but only just got this completed when we needed to.

So, if you are thinking of creating video or audio. When you are putting together ideas we strongly advise creating a video script. That helps you finalise the words, but also to think about the visuals (which may be a talking head, but may be many other things). Think about what you want to say, look at other videos to think about visual aspects. Source images from creative commons, take your own images… And sometimes if you have an abstract concept to describe think about how you might do that…

You also want to think about what you want to call your video and how long it would be – we try to keep videos under 6 minutes. For Philosophy and the Sciences we filmed in a really lovely library… That looked good and let us do separate takes and do cutaways as part of the visuals.

If you do grab Creative Commons images, do keep track of your sources. You can use our spreadsheet if you want to – capture the source, the source link, etc. And that means you can license your own work openly if you want to. You can’t always do that, but when you do you want to provide a license, evidence any research used, and evidence any source materials used.
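A tracking sheet like the one described can be kept as a simple CSV built up as you go. This is a sketch only – the column names are my guess at what such a template might hold, not the team’s actual layout:

```python
import csv

# Hypothetical columns -- the team's actual spreadsheet template may differ
FIELDS = ["asset", "source", "source_link", "author", "licence"]

def write_attributions(rows, path="attributions.csv"):
    """Record every sourced image as you go, so the finished video can
    credit everything and be openly licensed with confidence."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
```

The habit matters more than the tooling: any row you can’t fill in completely is an image you probably shouldn’t use in an openly licensed video.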

For scheduling a production you need to think about equipment, location, contributors, script, images or other source material, licenses for these, and time to create transcripts.

Q1: Is there a university transcription service?

A1: We outsource at present. We think that there may be some opportunity to do this in house.

Comment: If there is a need here then it would be really useful to gather evidence of that need.

Ross: There is also some discussion from the Web Publishers Clinic around this too which I’ll share.

Comment: And Informatics has masters students working on automated transcriptions.

Imogen: The timescales here tend to be 6-8 months – including emails and preparation etc. More collaborators can mean that it takes longer. For about half an hour of video content you need to allow 1-2 days to record, and then about a week or more for editing. Editing is where a lot of the creativity happens.

We have a webpage that lists our DIY media kit for hire. We also have our attributions spreadsheet template, and Creative Commons attribution guidance.

Q2: Have you found that you are required to put any of the people you record through media training? Is that something you advise?

A2: We tend not to advise that. It’s geared towards giving an interview on the news. For course materials it’s a different style – and being comfortable with the material and the setting. In some ways the MOOC production timeline is getting used to creating video. Every team we get is new to this… You try it and you learn it…

Q2: One thing from the previous speaker is that people seemed very natural…

Comment: But that’s a second or third take thing… The first take isn’t likely to have been as natural.

Imogen: And you get used to that experience anyway, you become more natural on camera.

We are now watching the Edinburgh MOOCs showreel… 

Prof Clive Greated – Use of video and sound in fluid mechanics and acoustics teaching

I have been teaching fluid mechanics at Edinburgh since the 1970s, but a while back I began getting involved in teaching acoustics and became interested in sound. One of the things that I created for that course was a series of podcasts on different instruments, and although I stopped teaching the acoustics course ages ago I happened to mention that I had these. Then maybe 5 years back I was asked to take over a third-year fluid mechanics course, and I wanted to use that idea of podcasts, or something similar, to bring out the practical aspects of engineering.

So my idea was to go into the field and look at real engineering sites, so students had a feel for the realities of a real system. A large section of my course is on turbines, used in hydro stations etc. It’s quite difficult to visualise those for the students… But I wanted to encourage students to go and take a look at real systems as there are hundreds in Scotland. (We are now watching a video on hydroelectric systems.) The videos are about 3 minutes long. I’ve made 50-60 of these. Some are a bit longer – one on the Physics and Astronomy department is 30 minutes long.

So, I’ve taken the various topics and made videos around that… One of the topics is waves and wave power, and Scotland had the first wave turbines attached to the grid, so again just giving students a view of what that looks like in practice. (Watching a wave turbine video now, showing a decommissioned turbine to explain the working).

Again, I have another clip and then I’ll share some reflections on using these. Now, another topic is high speed flows and super sonic flight. We have the museum of flight just up the road so I made just a short clip about that (now watching this, which discusses the power and inefficiency of Concorde).

So for all of these I’ve tried to get real examples for students. And I just want to talk briefly on practicalities. You’ll see that I’m in some of those videos… Sound recording is absolutely crucial – you have to monitor that really carefully. So you need a camera with proper sound facilities, XLR inputs etc. And in most of these videos you have voice over… A very useful facility in the University is an anechoic chamber. You really need that sort of soundproofed space to record audio for video. There is a small semi-anechoic space in Informatics. The high quality space at Kings Buildings is also available to use – you need to call to book it but that can be done.

In terms of audio, many of our students listen to recordings through iPads/iPhones, and that’s an opportunity to record in binaural sound (now watching a video, with binaural sound, of a wave tank). In fact the first recording I made of the wave tank – recorded in slow motion and with binaural audio from the sea – had over 750k hits on YouTube.

I have found a real interest from students in this, which I’m really pleased about. It is really good to incorporate the sound and the video. I’m actually retired, but still teaching (full time!) so probably have more time than most.

Q1: I hope you’ve been nominated for teaching awards?

A1: I have been nominated every year, and students always cite that material as being helpful.

Q2: How have the rest of the faculty responded?

A2: I haven’t had a huge response. I have Video PremierPro editing on my machine, but I basically do this all myself.

Q3: Did you have a challenge getting people to be natural on camera?

A3: I have to confess my wife is my sound recordist – I drag her around Scotland.

Q4: How do you get to film on location – do you just call people up?

A4: Yes. My next film is in Orkney with Scot Renewables and that’s going to be the largest tidal generator in the world. We’ve already been to Harland and Wolf in Belfast, where it is being constructed so there’ll be that full lifecycle. People are keen to be in videos. You have to ask people, but they are generally happy to take part. It may be that for some commercial stuff there might be concern, but generally this is fine. People are quite up for that.

Q5: Are these openly on YouTube?

A5: I think they will be on the Open.ed website. And will be available there. So I have changed all the licenses ready.

Hands On MediaHopper Session – Stephen Donnelly and Mark Jennings

We are going to quickly show you how to log in to MediaHopper and download the CapturEd software. (Demo taking place.)

 

 


Social Media for Learning in Higher Education 2015 (#SocMedHE15) Conference – LiveBlog

Today I’m at Sheffield Hallam University for Social Media for Learning in Higher Education 2015 (follow #SocMedHE15), where Louise Connelly (from UoE’s Royal (Dick) School of Veterinary Studies) and I will be presenting some of our Managing Your Digital Footprint research later today.

I’ll be liveblogging but, as the wifi is a little variable, there may be a slight delay in these posts. As usual, as this is a liveblog, all corrections, additions, comments, etc. are very welcome.

Welcome

At the moment we are being welcomed to the day by Sheffield Hallam’s Pro Vice Chancellor, who highlights that there are 55 papers from 38 HEIs. The hope is that today will generate new conversations and communities, and for those to keep going – and the University is planning to run the conference again next year.

Keynote by Eric Stoller

We are starting with a very heavily Star Wars themed video introducing Eric and his talk….

When he thinks about his day it has no clear pattern, and includes a lot of watching videos, exploring what others are doing… And I’m a big fan of Twitter polls (he polls the room – a fair few of us use them), and when you poll people about how universities are using social media we see use for marketing and communications, teaching and learning, a whole range of activities…

There are such a range of channels out there… Snapchat, how many of you are Snapchatters? (fair few) and how many of you take screen shots? How about Reddit… yeah, there are a few of us, usually the nerdy folk… YikYak… I’m avoiding that to avoid Star Wars spoilers right now… Lots of sites out there…

And now what we say online matters. That is game changing… We have conversations in this auditorium and they don’t get shared beyond the room… But online our comments reach out beyond this room… And that can be where we get into trouble around our digital identity. We can really thank Marc Prensky for messing things up here with his Digital Natives idea… Dave White brilliantly responded to that, though few seemed to read it!

But there are some key issues here. Social media blurs professional and personal identities…

My dad was checking out Facebook but he’s not on Facebook – he was using my mother’s account… My parents have given me a range of interesting examples of people blurring between different spaces… So my mom added me on Facebook… Is she my friend? I think she has a different designation. I got on there and she already had 8 friends – how did they get there first? Anyway, she is experiencing Facebook in a way that I haven’t for years… My mom joined Facebook in 2014 (“I wanted to make sure it wasn’t a fad”) and when you have 8 friends you truly see everything… She sees people that she doesn’t know making fun of, saying snarky things to, her child (me)… We’ve never really had a space where we have that blurring of people. So, my mom hops into a comment thread to defend me… And then people make fun of her… So I have to defend her… We haven’t really adapted and evolved our ways of being professional, of managing relationships for this space yet.

One thing we haven’t come to terms with is the idea of leadership in social media. No matter who you are you can educate, promote, etc. One of my favourite leaders on social media is in the US, president of the University of Cincinnati (@PrezOno). He has a lot of followers and engagement. Typically if your academics, your leaders, are using social media and sharing their work and insights, that says a lot about the organisational culture you are trying to build and encourage.

When you are thinking about employability (and man, you can’t miss this University’s employability office)… It’s about personal brand – what you post and say matters… It’s being human.

Facebook has been around 11 years now, and it’s massive… There are over 1 billion users – in fact in September there were over 1 billion in a single day. But people don’t use it in the same ways they did previously… If you look at institutions with an older cohort, though, then Facebook is where it’s at.

I have this quote from the University of Edinburgh’s Managing Your Digital Footprint account that 90% of bosses use Facebook to vet candidates… Which is potentially an issue… As students don’t always post that carefully or with an awareness of how their comments appear later on…

As a consultant I tell people not to fall in love with one platform, but I’m a little in love with Twitter. And there are really interesting things taking place there. We have things like #LTHEchat – a discussion of technology in education. And this is a space where comments are kind of preserved… But that can include silly comments, things we don’t want to stick around. And I love when universities connect students to alumni… We have to think about criticality and digital literacy in these spaces too…

Different spaces also work for different uses… Some love Vine, those 6 second videos. And when we think about teaching, we want to talk about storytelling – some of the YouTube vloggers are a great place to learn about creating narrative and story. So, for instance, Casey Neistat, a vlogger who has also directed commercials for brands like Nike, is a great person to watch. For example his video on Haters and Losers… [we are now watching videos]

How many of you are on LinkedIn? [we mostly are] I assume those not on LinkedIn don’t have a job… There is a huge amount of useful stuff on there, including organisational pages… But it doesn’t always have a great reputation [shows a meme about adding you as a connection]. This is a space where we get our recommendations, our endorsements. Right now LinkedIn is a powerful place. LinkedIn is the only major social media site where there are more users aged 30-49 than 18-29 [stat from Pew Research]. How many here work in employability or careers? You get that thing where students only approach you 5 minutes before they leave… They should really be getting on LinkedIn earlier. People can be weird about adding their students – it’s not about adding your students as friends, it’s an opportunity to recommend and support each other – much better there than Rate My Professor.

I wanted to show this tweet from the Association of Colleges that “soft skills should be called human skills. Soft makes it sound inferior, which we all know they’re not”. Those soft skills are part of what we do with social media…

When I moved to the UK – my wife got a promotion – and I, as a consultant, had all my networks in the US… But I also had social media contacts in the UK… And I was able to use LinkedIn groups, connections, etc. to build relationships in the UK, to find my way into the higher education sector here. I was talking to a LinkedIn rep last week at Princeton… What do you think the number one activity is on LinkedIn? It’s lurking… And I did a lot of strategic lurking…

So, we have these new spaces but we also have some older online spaces to remember…. So, for instance, what happens when you Google yourself? And that’s important to do… Part of what students are doing when they build up their profile online is to be searchable… To have great presence there.

And email still matters. How many of you love email? [one does] And how many of us have checked email today? [pretty much all]. We are all professional email checkers in a way… Email works if we do it right… But we don’t. We send huge long messages, we reply-all to unsubscribe… It’s not surprising if students don’t get that [cue a tweet showing that an email with a subject line tactically mentioning free football tix miraculously got read by students].

How many of you are concerned about privacy on social media? It’s always a huge concern. We have spaces like Snapchat – ephemeral except some of you take screen shots – and Yik Yak. We’ve already had issues with Yik Yak – a lecturer walked out when she saw horrible things people were posting about her… But Yik Yak tends to be sex and drugs and Netflix… Also a lot of revision…

And we have Periscope. Twitter owns it now, so who knows where that will go… It’s a powerful tool to have… You can livestream video from anywhere, which used to be hugely difficult and expensive. And you get comments and discussion.

And you don’t need to always do social media by posting, there is so much to listen and learn from…

The student experience is holistic. Just as social media blurs personal and professional selves, the same thing happens with teaching and learning and higher education. These are not separate entities in an organisation now… academic advising, careers services, induction/orientation, first year success, mental health/wellness…. So much learning happens in this space, and it’s not necessarily formal…

There is no such thing as a digital native… there are people learning and trying things…

So, now, some Q&A.

Q&A

Q1) When you see lecturers named on YikYak… Can you really just ignore it?

A1) On YikYak the community can downvote unpleasant bad things. In the US a threat can be prosecuted [also in the UK, where hate speech laws also apply]. But if I say something insulting it’s not necessarily illegal… It’s just nasty… You get seasonal trolling – exam time, venting… But we have to crack the nut about why people are doing and saying this stuff… It’s not new, the app just lets us see it. So you can downvote. You can comment (positively). We saw that with Twitter, and we still see that on Twitter. People writing on pointed issues still get a lot of abuse… Hate speech, bullying, it’s not new… it’s bigger than social media… It’s just reflected by social media.

Q2) On the conference hashtag people are concerned about going into the open spaces… and particularly the ads in these spaces…

A2) I am a big fan of adblock in Chrome. But until this stuff becomes a public utility, we have to use the tools that have scale and work the best. There are tools that try to be Facebook and Twitter without the ads… But it’s like telling people to leave a party and go to an empty room… And if you use Google you are being sold… I have so much commercial branded stuff around me. When our communications are being sold… That gets messy… Instagram a while back wanted to own all the photos shared, but there was a revolt from photographers and they had to go back on that… The community changed that. And you have to block those who do try to use you or take advantage (e.g. generating an ad that says Eric likes University of Phoenix, you should too…).

Q3) I find social media makes me anxious, there are so many issues and concerns here…

A3) I think we are in a world where we need discipline about not checking our phone in the middle of the night… Don’t let these things run your life… If anything causes you anxiety you have to manage that, you have to address that… You all are tweeting, my phone will have notifications… I’ll check it later… That’s fine… I don’t have to reply to everyone…

Q4) You talked about how we are all professional emailers… To what extent is social media also part of everybody’s job now? And how do we build social media in?

A4) In higher ed we see digital champions in organisations… Even if not stated. Email is assumed in our job descriptions… I think social media is starting to weave in in the same ways… We are still feeling out how social media fits into the fabric of our day… The learning curve at the beginning can feel steep if everything is new to you… Twitter took me a year or two to embed in my day, but I’ve found it effective, efficient, and now it’s an essential part of my day. But it’s nice when communication and engagement is part of a job description, it frees people to do that with their day, and ties it to their review process etc.

Workshops 1: Transforming learning by understanding how students use social media as a different space – Andrew Middleton, Head of Academic Practice and Learning Innovation, LEAD, Sheffield Hallam University

I’m assuming that, having come to a conference on social media in learning, you are passionate about learning and teaching… And I think we have to go back to first principles…

Claudia Megele (2015) has, I think, got it spot on about pedagogy. We are experiencing “a paradigm shift that requires a comprehensive rethink and reconceptualisation of higher education in a rapidly changing socio-technological context where the definition straddles formal and informal behaviours” [check that phrasing].

When we think about formal, that tends to mean spaces like we are in at the moment. Michael Errow makes the point that non-formal is different, something other than the formal teaching and learning space. One way to look at this is to think about disruption, and disrupting the formal. Because of the media and technologies we use, we are disrupting the formal… In that keynote everyone was in what Eric called the “praying” position – all on our phones and laptops… We have changed in these formal spaces… Through our habits and behaviours we are changing our idea of formal, creating our own (parallel) informal space. What does that mean for us as teachers? We have to engage in this non-formal space. From provided to self-constructed, from isolated to connected learning, from directed to self-determined, from construction to co-construction, from impersonal to social, and from the abstract and theoretical to authentic and practical (our employers brief our students through YouTube, through tweet chats – e.g. a student oncology tweet chat).

 

11:20-11:35 – Refreshment Break

11:35-12:05 – Short Papers 1

12:10-12:40 – Short Papers 2

12:40-13:40 – Lunch

13:40-14:40 – Workshops 2 (afternoon) 

14:40-14:55 – Refreshment Break

14:55-15:25 – Short Papers 3

15:30-16:00 – Short Papers 4

16:00 – Conference ends


Christine Hine: Ethnography for the Internet: exploring multiple meanings of minimal infrastructures [LiveBlog]

Today I am delighted to be at a guest seminar from Christine Hine, from the Department of Sociology, University of Surrey, hosted at the University of Edinburgh Department of Sociology. You can read more about the event here. I’ll be liveblogging her seminar and, as usual, any corrections etc. are welcomed. 

Kate Orton-Johnson is introducing us to the session and the format: a formal talk and then an informal Q&A.

And now, for Christine Hine…

I am going to talk about Ethnography for the Internet (Hine’s latest book) and then I’ll talk in more detail about the idea of “minimal infrastructures” – the kinds of peer to peer infrastructures (I’ll be talking about Freecycle), and some work I’ve been doing with Alix Rufas Ripol from Maastricht University.

I am going to be talking about this three way conceptualisation of the internet – as embedded, embodied, everyday – to talk about why some strategies are useful in research on the internet. And I’ll go on to talk about some of the challenges about this.

In my background… I was writing a handbook chapter last week and, looking back, found myself saying “yes, I’ve been doing ethnographies of the internet for 20 years”… And the internet has such a different meaning now. My work began as the internet was just beginning to be seen as an ethnographic space, a field site to work in. The internet has evolved as a phenomenon, and the way it has become embedded in our day to day life has changed – although I don’t necessarily buy into this web 1.0/2.0 shift.

And I continue to find Science and Technology Studies useful for understanding the internet: the ways in which the internet is an upshot of social processes and a site for social innovation, the infrastructural inversions (see e.g. Jeff Balfhurst), and the invisible work which makes this thing function so smoothly. So these ideas have been important, as has the idea of the internet as both culture and cultural artefact. Our expectations of it are shaped by social interaction; it impacts on us but it is impacted upon by us. We are shaped in what we do with it by our peer networks, what we see others doing with it, how the mass media presents it.

So my key question has been “What do people think they are up to when they use the internet?”

So we are at the point now that online-only ethnography is legitimate, but only as one choice among many. And many of our theoretical questions are better addressed by multi-sited and multi-modal designs, by what Postill and Pink (2012) call the “messy web of interconnections”. We don’t know where the site is; we construct that.

Ethnographers of the internet are often drawn in two directions. They are drawn outward, into diverse frames of meaning making. But they are also drawn inward to auto-ethnographic approaches, aimed at capturing modes of experience and feeling and acknowledging that.

There are, what I call, the “three e’s” of the internet…

The Embedded Internet is rarely a transcendent “cyberspace”; we do not grandly “go online”. Instead it is meaningful within specific contexts. It is subject to multiple frames of meaning-making. So you might look at the way it is embedded in towns, in households, in particular devices (e.g. Freecycle is different on my phone vs my laptop), in backchannels (and conversations), in institutions (e.g. biologists engaging with their disciplinary colleagues – and how this embeddedness must make sense for the discipline, be accountable and rewardable activity), in workplaces, in structures of reward, accountability and recognition. So if we are conducting an ethnographic study of the internet, or some aspect of the internet, we have to make choices of the frame of meaning-making to pursue, choices both arbitrary and important.

The Embodied Internet is about the idea that “going online” is not necessarily a discrete form of experience. Being online occurs alongside and complements other embodied ways of being and acting in the world. That emphasises the significance of sensory sensitivity in ethnography as we navigate the mediated world. And thinking about contingencies and choices, and what it means to navigate this complex texture, where we cross between different ways of communicating. If we are not just engaged in one discussion or community, we are moving between different ways of being or knowing, and we need to know and recognise that… That moment when you try to contact an informant or participant in an interview and you are thinking about how you might approach them, what you don’t know… Reflecting on that, what that means for you to be with these people, etc. and how that can mirror the experience of others. All of these spaces let us have the same experience, in some way, with the participants in the setting. We may not be full participants, but we are using the same medium and can use that as a resource.

The Everyday Internet indexes a very specific methodological problem – the fact that what we want to look at and study is not necessarily what our participants want to talk about. We want to look at the varying visibility of the phenomenon “internet” and of specific platforms… Ethnographers have always relied on observing and eavesdropping, and that is much harder to do here. It is an issue of dealing with silence in everyday discourse, of examining the specificity of occasions when the internet is foregrounded as such. Sometimes. If you look at newspapers now, versus 15 years ago… There is some commonality about the coverage of the internet as a problematic, disruptive, corrupting space… But now it is not “the internet” but specific platforms. So it can be topical at the same time as being almost forgotten. So we have to treat the silence and the topicality of infrastructures as complementary methodological challenges.

So, the methodological challenge is that the world does not make sense one medium at a time, but many of our methods carve it up in this way. Ethnography is a really important tool here, a key resource. Situations develop rapidly and unpredictably before we have stable methods to suit them. So an ethnography that can move through this terrain and reflect upon it is certainly an important part of the repertoire. And we are also in a world where there is a real complexity about understanding where “there” is. So we have to take responsibility for crafting objects of study to suit strategic objectives.

We have to turn to reflexivity and autoethnography to explore the individualised experience – a way to deal with this silence that we encounter. We need to use connective and mobile methods to explore indeterminate and emergent fields. Actually using visualisation and large scale data analysis can aid us to formulate questions. We also need responsive methods.

So, that was a swooping overview. I now want to talk about a particular example.


Data Science for Media Summit LiveBlog

Today I am at the “Data Science for Media Summit” hosted by The Alan Turing Institute & University of Edinburgh and taking place at the Informatics Forum in Edinburgh. This promises to be an event exploring data science opportunities within the media sector, and the attendees are already proving to be a diverse mix of media, researchers, and others interested in media collaborations. I’ll be liveblogging all day – the usual caveats apply – but you can also follow the tweets on #TuringSummit.

Introduction – Steve Renals, Informatics

I’m very happy to welcome you all to this data science for media summit, and I just wanted to explain that idea of a “summit”. This is one of a series of events from the Alan Turing Institute, taking place across the UK, to spark new ideas, new collaborations, and build connections. So researchers understanding areas of interest for the media industry. And the media industry understanding what’s possible in research. This is a big week for data science in Edinburgh, as we also have our doctoral training centre so you’ll also see displays in the forum from our doctoral students.

So, I’d now like to handover to Howard Covington, Chair, Alan Turing Institute

Introduction to the Alan Turing Institute (ATI) – Howard Covington, Chair, ATI

To introduce ATI I’m just going to cut to our mission: to make the UK the world leader in data science and data systems.

ATI came about from a government announcement in March 2014, then a bidding process leading to the universities being chosen in January 2015, and a joint venture agreement between the partners (Cambridge, Edinburgh, Oxford, UCL, Warwick) in March 2015. Andrew Blake, the institute’s director, takes up his post this week; he was previously head of research for Microsoft R&D in the UK.

Those partners already have about 600 data scientists working for them and we expect ATI to be an organisation of around 700 data scientists as students etc. come in. And the idea of the data summits – there are about 10 around the UK – for you to tell us your concerns, your interests. We are also hosting academic research sessions for them to propose their ideas. 

Now, I’ve worked in a few start ups in my time and this is going at pretty much as fast a pace as you can go.

We will be building our own building, behind the British Library opposite the Francis Crick building. There will be space at that HQ for 150 people. There is £67m of committed funding for the first 5 years – from companies and organisations with a deep interest who are committing time and resources to the institute.

The Institute sits in a wider ecosystem that includes: Lloyds Register – our first partner who sees huge amounts of data coming from sensors on large structures; GCHQ – working with them on the open stuff they do, and using their knowledge in keeping data safe and secure; EPSRC – a shareholder and partner in the work. We also expect other partners coming in from various areas, including the media.

So, how will we go forward with the Institute? Well, we want to do both theory and impact. So we want major theoretical advances, but we will devote time equally to practical, impactful work. Maths and Computer Science are both core, but we want to be a broad organisation across the full range of data science, reflecting that we are a national centre. But we will have to take a specific interest in particular areas. There will be an ecosystem of partners. And we will have a huge training programme with around 40 PhD students per year, and we want those people to go out into the world to take data science forward.

Now, the main task of our new director, is working out our science and innovation strategy. He’s starting by understanding where our talents and expertise already sit across our partners. We are also looking at the needs of our strategic partners, and then the needs emerging from the data summits, and the academic workshops. We should then soon have our strategy in place. But this will be additive over time.

When you ask someone what data science is, that definition is ever changing and variable. So I have a slide here that breaks the rules of slide presentations really, in that it’s very busy… But data science is very busy. So we will be looking at work in this space, and going into more depth, for instance on financial sector credit scoring; predictive models in precision agriculture; etc. Underlying all of these are similarities that cross many fields. Security and privacy is one such area – we can only go as far as it is appropriate to go with people’s data, an issue both for ATI and for individuals.

I don’t know if you think that’s exciting, but I think it’s remarkably exciting!

We have about 10 employees now, we’ll have about 150 this time next year, and I hope we’ll have opportunity to work with all of you on what is just about the most exciting project going on in the UK at the moment.

And now to our first speaker…

New York Times Labs – Keynote from Mike Dewar, Data Scientist

I’m going to talk a bit about values, and about the importance of understanding the context of what it is we do. And how we embed what we think is important into the code that we write, the systems that we design and the work that we do.

Now, the last time I was in Edinburgh, in 2009, I was doing a Post Doc working on modelling biological data, based on video of flies. There was loads of data, a mix of disciplines, and we were market focused – the project became a data analytics company. And, like much other data science, it was really rather invasive – I knew huge amounts about the sex life of fruit flies, far more than one should need to! We were predicting behaviours, understanding correlations between environment and behaviour.

I now work at the New York Times R&D and our task is to look 3-5 years ahead of current NYT practice. We have several technologists there, but also colleagues who are really designers. That has forced me up a bit… I am a classically trained engineer – to go out into the world, find the problem, and then solve it by finding some solution, some algorithm to minimise the cost function. But it turns out in media, where we see decreasing ad revenue, and increasing subscription, that we need to do more than minimise the cost function… That basically leads to click bait. So I’m going to talk about three values that I think we should be thinking about, and projects within that area. So, I shall start with Trust…

Trust

It can be easy to forget that much of what we do in journalism is essentially surveillance, so it is crucial that we do our work in a trustworthy way.

So the first thing I want to talk about is a tool called Curriculum, a Chrome browser plug in that observes everything I read online at work. Then it takes chunk of text, aggregates with what others are reading, and projects that onto a screen in the office. So, firstly, the negative… I am very aware I’m being observed – it’s very invasive – and that layer of privacy is gone, that shapes what I do (and it ruins Christmas!). But it also shares what everyone is doing, a sense of what collectively we are working on… It is built in such a way as to make it inherently trustworthy in four ways: it’s open source so I can see the code that controls this project; it is fantastically clearly written and clearly architected so reading the code is actually easy, it’s well commented, I’m able to read it; it respects existing boundaries on the web – it does not read https (so my email is fine) and respects incognito mode; and also I know how to turn it off – also very important.

In contrast to that I want to talk about Editor. This is a text editor like any other… Except whatever you type is sent to a series of microservices which look for similarity against the NYT keyword corpus, and then send that back to the editor – enabling a tight mark-up of their text. The issue is that the writer is used to writing alone, then sending to production. Here we are asking the writer to share their work in progress and send it to central AI services at the NYT, so making that trustworthy is a huge challenge, and we need to work out how best to do this.
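The Editor idea as described – text goes off to services that match it against a keyword corpus and return mark-up the editor can highlight – can be sketched very simply. This is a hedged illustration only, not the NYT system; the corpus entries and function name are invented:

```python
# Hypothetical sketch of keyword annotation (not the NYT Editor service):
# match text against a small corpus and return mark-up spans, in reading order.
CORPUS = {"climate change": "topic/environment", "senate": "topic/politics"}

def annotate(text):
    """Return (keyword, tag, position) for each corpus match in the text."""
    found = []
    lower = text.lower()
    for keyword, tag in CORPUS.items():
        pos = lower.find(keyword)
        if pos != -1:
            found.append((keyword, tag, pos))
    return sorted(found, key=lambda m: m[2])

print(annotate("The senate debated climate change today."))
# → [('senate', 'topic/politics', 4), ('climate change', 'topic/environment', 19)]
```

In a real service the matching would be far richer (entity linking, similarity against a large corpus), but the round trip – text in, annotated spans out – is the shape being described.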

Legibility

Data scientists have a tendency towards the complex. I’m no different – show me a new tool and I’ll want to play with it and I enjoy a new toy. And we love complex algorithms, especially if we spent years learning about those in grad school. And those can render any data illegible.

So we have [NAME?], an infinite scrolling browser – when you scroll you can continue on. And at the end of each article an algorithm offers 3 different recommendation strands… It’s like a choose your own adventure experience. So we have three recommended articles, based on very simple recommendation engines, which renders them legible. These are “style graph” – things that are similar in style; “collaborative filter” – readers like you also read; “topic graph” – similar in topic. These are all based on the nodes and edges of the connections between articles. They are simple, legible concepts, and easy to run, so we can use them across the whole NYT corpus. They are understandable, so they have a much better chance of resonating with our colleagues.
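Part of what makes those strands legible is that each one is just a lookup over typed, weighted edges in an article graph. A minimal sketch of that idea (not NYT code; the article names, edge types and weights are invented):

```python
# Hypothetical article graph: edges are (from, to, strand, weight).
# One simple recommender per strand ("style", "topic", "collab"), as described.
edges = [
    ("a1", "a2", "style", 0.9),
    ("a1", "a3", "topic", 0.8),
    ("a1", "a4", "collab", 0.7),
    ("a1", "a5", "topic", 0.6),
]

def recommend(article, strand, edges):
    """Return neighbours of `article` along one strand's edges, best first."""
    neighbours = [(w, b) for a, b, s, w in edges if a == article and s == strand]
    return [b for w, b in sorted(neighbours, reverse=True)]

print(recommend("a1", "topic", edges))  # → ['a3', 'a5']
```

Each strand is trivially explainable ("similar in topic", "readers like you also read"), which is exactly the legibility point being made.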

As a counter point, we were tasked with looking at behavioural segmentation – to see how we can build different products for different segments. Typically segmentation is done with demography. We were interested, instead, in using just the data we had, the behavioural data. We arranged all of our pageviews into sessions (arriving at a page through to leaving the site). So, for each session we represented the data as a transition matrix, to understand the probability of moving from one page to the next… So we can perform clustering of behaviours… So looking at this we can see that there are some clusters that we already know about… We have the “one and dones” – read one article then move on. We found the “homepage watchers” who sit on the homepage and use that as a launching point. The rest, though, the NYT didn’t have names for… So we now have the “homepage bouncer” – going back and forth from the front page; and the “section page starter” as well, for instance.

This is simple k-means clustering – very simple, but the clusters are dynamic, and effective. However, this is very, very radical at the NYT amongst non-data scientists. It’s hard to make it resonate to drive any behaviour or design in the building. We have a lot of work to do to make this legible and meaningful for our colleagues.
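The pipeline described – sessions turned into per-session transition matrices, flattened into vectors, then clustered with k-means – can be sketched in a few lines. This is a hypothetical illustration, not Newscorp/NYT code; the page names, sessions and the toy k-means are all invented for the sketch:

```python
# Hedged sketch of behavioural segmentation: session -> transition matrix
# -> vector -> k-means cluster. Page names are hypothetical.
import random

PAGES = ["home", "article", "section"]
IDX = {p: i for i, p in enumerate(PAGES)}

def session_vector(session):
    """Row-normalised page-to-page transition counts, flattened to a vector."""
    n = len(PAGES)
    counts = [[0.0] * n for _ in range(n)]
    for a, b in zip(session, session[1:]):
        counts[IDX[a]][IDX[b]] += 1
    for row in counts:
        total = sum(row)
        if total:
            row[:] = [c / total for c in row]
    return [c for row in counts for c in row]

def kmeans(vectors, k, iters=20, seed=0):
    """Tiny Lloyd's-algorithm k-means; returns the k cluster centres."""
    random.seed(seed)
    centres = random.sample(vectors, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            dists = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centres]
            groups[dists.index(min(dists))].append(v)
        centres = [
            [sum(col) / len(g) for col in zip(*g)] if g else centres[i]
            for i, g in enumerate(groups)
        ]
    return centres

# A "one and done", a "homepage bouncer", and a "section page starter":
sessions = [["article"], ["home", "article", "home", "article"],
            ["home", "section", "article"]]
vectors = [session_vector(s) for s in sessions]
print(len(kmeans(vectors, 2)))  # → 2
```

In practice you would use a library implementation (e.g. scikit-learn's KMeans) over millions of sessions, but the point of the talk stands: the representation is simple enough to explain to colleagues.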

The final section I want to talk about is Live…

Live

In news we have to be live, we have to work in the timescales of seconds to a minute. In the lab that has been expressed as streams of data – never ending sequences of data arriving at our machines as quickly as possible.

So, one of our projects, Delta, produces a live visualisation of every single page view of the NYT – a pixel per person, starting on the globe, then pushing outwards. If you’ve visited the NYT in the last year or so, you’ve generated a pixel on the globe in the lab. We use this to visualise the work of the lab. We think the fact that this is live is very visceral. We always start with the globe… But then we show a second view, using the same pixels in the context of sections, of the structure of the NYT content itself. And that can be explored with an Xbox controller. Being live makes it relevant and timely, to understand current interests and content. It ties people to the audience, and encourages other parts of the NYT to build some of these live experiences… But one of the tricky things is that it is hard to use live streams of data, hence…

Streamtools, a tool for managing live streams of data. It should be reminiscent of Simulink or LabVIEW etc. [when chatting to Mike earlier I suggested it was a super-pimped, realtime Yahoo Pipes and he seemed to agree with that description too]. It’s now on its third incarnation and you can come and explore a demo throughout today.

Now, as data scientists, when we bring our systems to the table we need to be aware that what we build embodies our own values. And I think that for data science in media we should be building trustworthy systems, tools which are legible to others, and those that are live.

Find out more at nytlabs.com

Q&A

Q1) I wanted to ask about expectations. In a new field it can be hard to manage expectations. What are your users expectations for your group and how do you manage that?

A1) In R&D we have one data scientist and a bunch of designers. We make speculative futures, build prototypes, and bring them to the NYT, to the present, to help them make decisions about the future. In terms of data science in general at the NYT… Sometimes things look magic and look lovely but we don’t understand how they work; in other places it’s much simpler, e.g. counting algorithms. But there’s no risk of a data science winter, we’re being encouraged to do more.

Q2) NYT is a paper of record, how do you manage risk?

A2) Our work is informed by a very well worded privacy statement that we respect and build our work on. But the other areas of ethics etc. is still to be looked at.

Q3) Much of what you are doing is very interactive, and much of data science is about processing large sets of data… So can you give any tips, for someone working with terabytes of data, for working with designers?

A3) I think a data scientist is essentially creating a palette of colours for your designer to work with. And forcing you to explain that to the designer is useful, and enables those colours to be used. And we encourage that there isn’t just one solution, we need to try many. That can be painful as a data scientist as some of your algorithms won’t get used, but it gives some great space to experiment and find new solutions.

Data Journalism Panel Session moderated by Frank O’Donnell, Managing Editor of The Scotsman, Edinburgh Evening News and Scotland on Sunday

We’re going to start with some ideas of what data journalism is

Crina Boros, Data Journalist, Greenpeace

I am a precision journalist, and I have just joined Greenpeace having worked at Thomson Reuters, BBC Newsnight etc. I am not a data scientist, or a plain journalist; I am a precision journalist working with data. At Greenpeace data is being used for investigative journalism purposes, areas no longer or rarely picked up by mainstream media, to find conflicts of interest, and to establish facts and figures for use in journalism, in campaigning. And it is a way to protect human sources and enable journalists in their work. I have, in my role, both used data that exists and created data when it does not exist. And I’ve sometimes worked with data that was never supposed to see the light of day.

Evan Hensleigh, Visual Data Journalist, The Economist

I was originally a designer and therefore came into information visualisation and data journalism by a fairly convoluted route. At The Economist we like to say that we’ve been doing data journalism since we started: we were founded in the 1840s, at the time of the Corn Laws, in opposition to those proposals, and visualised the impact of those laws as part of that.

The way we now tend to use data is to illustrate a story we are already working on. For instance working on articles on migration in Europe, and looking at fortifications and border walls that have been built over the last 20 to 30 years lets you see the trends over time – really bringing to life the bigger story. It’s one thing to report current changes, but to see that in context is powerful.

Another way that we use data is to investigate changes – a colleague was looking at changes in ridership on the Tube, and the rise of the rush hour – and then use that to trigger new articles.

Rachel Schutt, Chief Data Scientist, Newscorp

I am not a journalist but I am the Chief Data Scientist at Newscorp, and I’m based in New York. My background is a PhD in statistics, and I used to work at Google in R&D and algorithms. And I became fascinated by data science so started teaching an introductory course at Columbia, and wrote a book on this topic. And what I now do at Newscorp is to use data as a strategic asset. So that’s about using data to generate value – around subscriptions, advertising etc. But we also have data journalism so I increasingly create opportunities for data scientists, engineers, journalists, and in many cases a designer so that they can build stories with data at the core.

We have both data scientists, but also data engineers  – so hybrid skills are around engineering, statistical analysis, etc. and sometimes individual’s skills cross those borders, sometimes it’s different people too. And we also have those working more in design and data visualisation. So, for instance, we are now getting data dumps – the Clinton emails, transcripts from Ferguson etc. – and we know those are coming so can build tools to explore those.

A quote I like is that data scientists should think like journalists (from DJ Patil) – in any industry. In Newscorp we also get to learn from journalists, which is very exciting. But the idea is that you have to be investigative, to be able to tell a story…

Emily Bell says “all algorithms are editorial” – because value judgements are embedded in those algorithms, and you need to understand the initial decisions that go with that.

Jacqui Maher, Interactive Journalist, BBC News Labs

I was previously at the NYT, mainly at the Interactive News desk in the newsroom. An area crossing news, visualisation, data etc. – so much of what has already been said. And I would absolutely agree with Rachel about the big data dumps and looking for the story – the last dump of emails I had to work with were from Sarah Palin for instance.

At the BBC my work lately has been on a concept called “Structured Journalism” – so when we report on a story we put together all these different entities in a very unstructured set of data as audio, video etc. Many data scientists will try to extract that structure back out of that corpus… So we are looking at how we might retain the structure that is in a journalist’s head, as they are writing the story. So digital tools that will help journalists during the investigative process. And ways to retain connections, structures etc. And then what can we do with that… What can make it more relevant to readers/viewers – context pieces, ways of adding context in a video (a tough challenge).

If you look at work going on elsewhere, for instance the team at the Washington Post working on IS, they are looking at how to similarly add context, how they can leverage previous reporting without having to do it from scratch.

Q&A/Discussion

Q1 – FOD) At a time when we have to cut staff in media, in newspapers in particular, how do we justify investing in data science, or how do we use data science?

A1 – EH) Many of the people I know came out of design backgrounds. You can get pretty far just using available tools. There are a lot of useful tools out there that can help your work.

A1 – CB) I think this stuff is just journalism, and these are just another set of tools. But there is a misunderstanding here: you don’t press a button and get a story. You have to understand that it takes time – there’s a reason it is called precision journalism. And sometimes the issue is that the data is just not available.

A1 – RS) Part of the challenge is about traditional academic training and what is and isn’t included there… But there are more academic programmes on data journalism now. It’s a skillset issue. I’m not sure, on a pay basis, whether data journalists should get paid more than other journalists…

A1 – FOD) I have to say, in many newsrooms journalists are not that numerate. Give them statistics, even percentages, and that can be a challenge. It’s almost a badge of honour as wordsmiths…

A1 – JM) I think most newsrooms have an issue of silos. You also touched on the whole “math is hard” thing. But to do data journalism you don’t need to be a data scientist. They don’t have to be an expert on maths, stats, visualisation etc. At my former employer I worked with Mike – who you’ve already heard from – who could enable me to cross that barrier. I didn’t need to understand the algorithms, but I had that support. You do see more journalist/designer/data scientists working together. I think eventually we’ll see all of those people as journalists though as you are just trying to tell the story using the available tools.

Q2) I wanted to ask about the ethics of data journalism. Do you think there is a developing field of ethics in data journalism?

A1 – JM) I think that’s a really good question in journalism… But I don’t think it’s specific to data journalism. When I was working at the NYT we were working on the Wikileaks data dumps, and there were huge ethical issues there around the information that was included, in terms of names, in terms of risk. And in the end the methods you might take – whether blocking part of a document out – the technology might vary but the ethical issues are the same.

Q2 follow up FOD) And how were those ethical issues worked out?

A1 – JM) Having a good editor is also essential.

A1 – CB) When I was at Thomson Reuters I was involved in running women’s rights surveys to collate data, and when you do that you need to apply research ethics, with advice from those appropriately positioned to give it.

A1 – RS) There is an issue that traditionally journalists are trained in ethics but data scientists are not. We have policies in terms of data privacy… But there is much more to do. And it comes down to the person who is building a data model – and you have to be aware of the possible impact and implications of that model. And risks also of things like the Filter Bubble (Pariser 2011).

Q3 – JO) One thing that came through listening to ? and Jacqui is that it’s become clear that data is a core part of journalism… You can’t get the story without the data. So, is there a competitive advantage to being able to extract that meaning from the data – is there a data science arms race here?

A3 – RS) I certainly look to the NYT and other papers, and I admire what they do, but of course the reality is messier than the final product… But there is some of this…

A3 – JM) I think that if you don’t engage with data then you aren’t keeping up with the field; you are doing yourself a professional disservice.

A3 – EH) There is a need to keep up. We are a relatively large group, but nothing like the scale of the NYT… So we need to find ways to tell stories that they won’t tell, or to have a real sense of what an Economist data story looks like. Our team is about 12 or 14, and that’s a pretty good size.

A3 – RS) Across all of our businesses there are 100s in data science roles, of whom only a dozen or so are on data journalism side.

A3 – JM) At the BBC there are about 40 or 50 people on the visual journalism team. But there are many more in data science in other roles, people at the World Service. But we have maybe a dozen people in the lab at any given moment.

Q4) I was struck by the comment about legibility and, a little bit related, about transparency in data. Data is already telling a story, there is an editorial dimension, and that is added to in the presentation of the data… And I wonder how you can improve transparency there.

A4 – JM) There are many ways to do that… To show your process, to share your data (if appropriate). Many share code on GitHub. And there is a question there though – if someone finds something in the data set, what’s the feedback loop.

A4 – CB) In the past where I’ve worked we’ve shared a document on the step-by-step process used. I’m not a fan of just sharing on GitHub; I think you need to hand-hold the reader through the data story etc.

Q5) Given that journalism is about holding companies to account… In a world where, e.g., Google are the new power brokers, who will hold them to account? I think data journalism needs a merger of journalism, data science, and design… Sometimes that can be in one person… And what do you think about journalism playing a role in holding the new power brokers to account?

A5 – EH) There is a lot of potential. These companies publish a lot of data and/or make their data available. There was some great work on FiveThirtyEight about Uber, based on a Freedom of Information request, to essentially fact-check Uber’s own statistics and reporting of activities.

Q6) Over the years we (Robert Gordon University) have worked with journalists from various organisations. I’ve noticed an issue, not yet raised, that journalists are always looking for a particular angle in data as they work with it… It can be hard to get an understanding from the data, rather than using the data to reinforce bias etc.

A6 – RS) If there is an issue of taking a data dump from e.g. Twitter to find a story… Well, dealing with that bias does come back to training. But yes, there is a risk of journalists getting excited, wanting to tell a novel story, without checking with colleagues or correcting the analysis.

A6 – CB) I’ve certainly had colleagues wanting data to substantiate the story, but it should be the other way around…

Q6) If you, for example, take the Scottish Referendum and the General Election, you see journalists so used to watching their dashboards and getting real-time feedback that they use them for the stories rather than doing any real statistical analysis.

A6 – CB) That’s part of the reason for reading different papers and different reporters covering a topic – and you are expected to have an angle as a journalist.

A6 – EH) There’s nothing wrong with an angle or a hunch but you also need to use the expertise of colleagues and experts to check your own work and biases.

A6 – RS) There is a lot more to understanding how the data came about, and people often use the data set as a ground truth; that needs more thinking about. It’s somewhat taught in schools, but not enough.

A6 – JM) That makes me think of a data set called GDELT, which captures media reporting and enables event detection etc. I’ve seen stories of a journalist looking at that data as a canonical source for all that has happened – and that’s a misunderstanding of how that data set has been collected. It’s close to a canonical source for reporting, but that is different. So you certainly need to understand how the data came about.

Comment – FOD) So, you are saying that we can think we are in the business of reporting fact rather than opinion but it isn’t that simple at all.

Q7) We have data science, is there scope for story science? A science and engineering of generating stories…

A7 – CB) I think we need a teamwork sort of approach to storytelling… With coders, with analysts looking for the story, the reporters doing field reporting, and the data vis people making it all attractive and sexy. That’s an ideal scenario…

A7 – RS) There are companies doing automatic story generation – like Narrative Science etc. already, e.g. on Little League matches…

Q7 – comment) Is that good?

A7 – RS) Not necessarily… But it is happening…

A7 – JM) Maybe not, but it enables story telling at scale, and maybe that has some usefulness really.

Q8/Comment) There was an earlier question about ethics, with the comment that nothing specific is needed, and the comment about legibility – and I think there is a conflict there. Statistical databases can infer missing data from the data you have, making valid inferences that could shock people because they are not actually in the data (e.g. salary prediction). This reminded me of issues such as source protection, where you may not explicitly identify the source but that source could be inferred. So you need a complex understanding of statistics to understand that risk, and to do that practice appropriately.
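As a toy illustration of the inference point raised here (all numbers are invented by me, not from the discussion): a simple linear fit over published salary records can “predict” a salary that never appears in the data.

```python
# Toy data: years of experience vs. salary (in £k) for five published records.
years = [1, 2, 3, 5, 6]
salary = [25, 28, 31, 37, 40]

# Ordinary least-squares fit by hand (no libraries needed).
n = len(years)
mean_x = sum(years) / n
mean_y = sum(salary) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, salary)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Infer the salary of someone with 4 years' experience -- a person who is
# NOT in the data set, which is exactly the kind of valid-but-surprising
# inference the questioner describes.
print(round(intercept + slope * 4, 1))  # → 34.0
```

The same logic is why source protection is hard: a source absent from a released data set can still sometimes be inferred from what is present.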

A8 – CB) You do need to engage with the social sciences, and to properly understand what you are doing in terms of your statistical analysis, your P values etc. There is more training taking place but still more to do.

Q9 – FOD) I wanted to end by coming back to Howard’s introduction. How could ATI and Edinburgh help journalism?

A9 – JM) I think there are huge opportunities to help journalists make sense of large data sets, whether that is tools for reporting or for analysis. There is one, called Detector.io, that lets you map reporting, for instance, but it is shutting down and I don’t know why. There are some real opportunities for new tools.

A9 – RS) I think there are areas in terms of curriculum: design, ethics, privacy, bias… Softer areas, not always emphasised in conventional academic programmes, but at least as important as the scientific and engineering sides.

A9 – EH) I think generating data for areas where we don’t have it. At the Economist we look at China, Asia, Africa, where data is either deliberately obscured or they don’t have the infrastructure to collect it. So tools to generate that would be brilliant.

A9 – CB) Understand what you are doing; push for data being available; and ask us and push us to be accountable, and it will open up…

Q10) What about the readers? You’ve been saying the journalists have to understand their stats… But what about the readers, who know the difference between reading the Daily Mail and the Independent, say, but don’t have the data literacy to understand the data visualisation etc.?

A10 – JM) It’s a data literacy problem in general…

A10 – EH) Data scientists have the skills to find the information and raise awareness.

A10 – CB) I do see more analytical reporting in the US than in Europe. But data isn’t there to obscure anything, and you have to explain what you have done in clear language.

Comment – FOD) It was once the case that data was scarce, and reporting was very much on the ground and on foot. But we are no longer hunter-gatherers in the same way… Data is abundant and we have to know how to understand and process it, and find the stories in it. We don’t have clear ethical codes yet. And we need a better understanding of what is being produced. And most of the media most people consume is local media – city and regional papers – and they can’t yet afford to get into data journalism in a big way. Relevance is a really important quality. So my personal challenge to the ATI is: how do we make data journalism pay?

And with that we are off for lunch and demos, but I’ll be blogging again from 1.50 (afternoon programme is below)… 

Ericsson, Broadcast & Media Services – Keynote from Steve Plunkett, CTO

Audience Engagement Panel Session
• Paul Gilooly – Director of Emerging Products, MTG
• Steve Plunkett – CTO, Broadcast & Media Services, Ericsson
• Pedro Cosa – Data Insights and Analytics Lead, Channel 4
• Hew Bruce-Gardyne – Chief Technology Officer, TV Squared
• Jon Oberlander (Moderator), University of Edinburgh

Networking Break with Demo Sessions

BBC – Keynote from Michael Satterthwaite, Senior Product Manager

Unlocking Value from Media Panel Session
• Michael Satterthwaite – Senior Product Manager, BBC
• Adam Farquhar – Head of Digital Scholarship, British Library
• Gary Kazantsev – R&D Machine Learning Group, Bloomberg
• Richard Callison – brightsolid
• Moderator: Simon King, University of Edinburgh


Clipper @ National Library of Scotland

Today I am at the National Library of Scotland for a Clipper project workshop (info here). Clipper is a project to create a content creation tool for multimedia, with funding from Jisc.

After an introduction from Gill Hamilton, it’s over to John Casey, who will be leading us through the day…

Introduction – John Casey

The tagline for the project is basically Clipper 1. 2. 3: Clip, Organise, Share.

We want your input early in the process, but that means we will be trying out a prototype with you – so there will be bugs and issues, but we are looking for your comments and feedback etc. The first outing of Clipper was in 2009, as a rapid development project which used Flash and Flex. Then it went to sleep for a while. Then we started working on it again when looking at Open Education in London…

Trevor: I’m Trevor Collins – research fellow at the Open University. My background is very technical – computer engineering, HCI – but all my research work is in the context of learning and teaching. We have a common interest in HTML5 video. And my interest is working, with you, to ensure this will be helpful and useful.

Will: My name is Will and my background is engineering. Originally I worked with John on this project in Flash etc., but that’s really died out and, in the meantime, HTML has moved on a long way; with video in HTML5 we can use the browser as the foundation, potentially, for some really interesting applications. For me, my interest today is in the usability of the interface.

With that we have had some introductions… It is a really interesting group of multimedia interested folk.

John Casey again:

This project is funded by Jisc as part of the Research Data Spring initiative, which is about technical tools, software and service solutions to support the researcher’s workflow and the use and management of their data. Now, it’s interesting that this room is particularly interested in teaching and learning; we are funded for researcher use, but of course that does not preclude teaching and learning use.

The project partners here are City of Glasgow College as lead, The Open University and ?

So, what is Clipper? One of the challenges is explaining what this project is… and what it is not. So we are punting it as a research tool for digital research with online media / time-based media (i.e. audio/video data). The aim is to create a software toolkit (FOSS) deployed in an institution or operated as a national service. We are about community engagement and collaborative design, delivering a responsive design. And that’s why we are here.

So, why do this? Well, time-based media is a large and “lumpy” data format, hard to analyse and even harder to share your analysis of. There are barriers to effective (re)use of audio and video data, including closed collections (IPR) and proprietary tools and formats. So we want to be able to create a “virtual clip” – and that means not copying any data, just metadata: start and stop points on a reference URI. And then also being able to organise that clip, to annotate it, and to group clips into cliplists – playlists of clips or excerpts etc. And then we can share, using cool URIs for those clips and playlists.
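A minimal sketch of what a “virtual clip” could look like as pure metadata (the class and field names here are my own invention, not Clipper’s; the `#t=start,end` fragment syntax follows the W3C Media Fragments convention):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A virtual clip: metadata only -- the source media is never copied."""
    source_uri: str   # URI of the original audio/video file
    start: float      # clip start, in seconds
    end: float        # clip end, in seconds
    note: str = ""    # optional annotation

    def fragment_uri(self) -> str:
        # W3C Media Fragments style: append #t=start,end to the source URI
        return f"{self.source_uri}#t={self.start:g},{self.end:g}"

# A cliplist is just an ordered list of clips, possibly from many sources.
cliplist = [
    Clip("https://example.org/interview.mp4", 12.5, 48.0, "theme: onboarding"),
    Clip("https://example.org/interview.mp4", 95.0, 120.0, "theme: navigation"),
]

print(cliplist[0].fragment_uri())
# → https://example.org/interview.mp4#t=12.5,48
```

Because only URIs and offsets are stored, sharing a clip or cliplist never duplicates or alters the source file.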

This means bringing audio and video data to life, enabling analysis without breaking copyright or altering the source data. We think it will streamline workflows and facilitate collaboration. And we think it will lead to new things. It is secure and safe – it respects existing access permissions to data and does not alter or duplicate the original files. And it creates opportunities for citizen science/citizen research and user generated content – e.g. crowdsourcing metadata and user analytics. Colleagues in Manchester, for instance, have a group of bus enthusiasts who may be up for annotating old bus footage. The people who use your archives or data can generate analytics or paradata, and use of that can be useful and interesting as well.

So Clipper is… an online media analysis and collaboration tool for digital researchers (i.e. it supports human-based qualitative analysis, collaboration and sharing). It is not an online audio/video editing tool. It is not a data repository. And it is not using machine analysis of time-based media.

Demonstration

John: The best way to understand this stuff is to demonstrate and test it out. We are going to take you through three workflows – these are just examples: (1) one source file, many clips; (2) many source files, many clips; (3) many source files, many clips, and annotations.

Over to Trevor and Will for examples.

Trevor: Hopefully more questions will emerge as we work through these examples.

Do bear in mind that what we will show you today is not a finished product, it’s a prototype. We want you to tell us what is good, what needs changing… You are the first of our three workshops so you get first say on the design! We want clear ideas on what will be useful… We hope it is fairly straightforward and fairly clear. If it isn’t, just tell us.

So, example (1): Analysing a source file – the idea is an app developer (researcher) interviewing a user when testing an app. So the flow is:

  • Create and open a new project
  • Add the source file to the project
  • Preview the file – to find emerging themes etc.
  • Create clips – around those themes.
  • Add clips to cliplist
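The five steps above could be sketched as follows, assuming a simple in-memory model (`Project`, `add_source` and `create_clip` are hypothetical names, not Clipper’s actual API):

```python
class Project:
    """A bare-bones project holding source URIs and a cliplist of virtual clips."""

    def __init__(self, title):
        self.title = title
        self.sources = []    # URIs of source media files
        self.cliplist = []   # ordered list of virtual clips

    def add_source(self, uri):
        self.sources.append(uri)

    def create_clip(self, uri, start, end, theme):
        # A clip is metadata only: the source URI plus start/end offsets
        # (in seconds) and a label for the emerging theme.
        clip = {"source": uri, "start": start, "end": end, "theme": theme}
        self.cliplist.append(clip)
        return clip

# Workflow (1): one source file, many clips.
project = Project("App user-testing interview")
src = "https://example.org/interview-01.mp4"
project.add_source(src)                                    # add the source file
project.create_clip(src, 30, 75, "first impressions")      # clip around a theme
project.create_clip(src, 140, 190, "navigation problems")  # another theme

print(len(project.cliplist))  # → 2
```

The preview step happens in the browser’s HTML5 player; only the chosen start/end offsets end up in the project.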

 
