Association of Internet Researchers AoIR 2016 – Day 1 – José van Dijck Keynote

If you’ve been following my blog today you will know that I’m in Berlin for the Association of Internet Researchers AoIR 2016 (#aoir2016) Conference, at Humboldt University. As this first day has mainly been about workshops – and I’ve been in a full day long Digital Methods workshop – we do have our first conference keynote this evening. And as it looks a bit different to my workshop blog, I thought a new post was in order.

As usual, this is a live blog post so corrections, comments, etc. are all welcomed. This session is also being videoed so you will probably want to refer to that once it becomes available as the authoritative record of the session. 

Keynote: The Platform Society – José van Dijck (University of Amsterdam) with Session Chair: Jennifer Stromer-Galley

 


Association of Internet Researchers AoIR 2016: Day 1 – Workshops

After a few weeks of leave I’m now back and spending most of this week at the Association of Internet Researchers (AoIR) Conference 2016. I’m hugely excited to be here as the programme looks excellent with a really wide range of internet research being presented and discussed. I’ll be liveblogging throughout the week starting with today’s workshops.

I am booked into the Digital Methods in Internet Research: A Sampling Menu workshop, although I may be switching session at lunchtime to attend the Internet rules… for Higher Education workshop this afternoon.

The Digital Methods workshop is being chaired by Patrik Wikstrom (Digital Media Research Centre, Queensland University of Technology, Australia) and the speakers are:

  • Erik Borra (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Axel Bruns (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Jean Burgess (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Carolin Gerlitz (University of Siegen, Germany),
  • Anne Helmond (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Ariadna Matamoros Fernandez (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Peta Mitchell (Digital Media Research Centre, Queensland University of Technology, Australia),
  • Richard Rogers (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Fernando N. van der Vlist (Digital Methods Initiative, University of Amsterdam, the Netherlands),
  • Esther Weltevrede (Digital Methods Initiative, University of Amsterdam, the Netherlands).

I’ll be taking notes throughout but the session materials are also available here: http://tinyurl.com/aoir2016-digmethods/.

Patrik: We are in for a long and exciting day! I won’t introduce all the speakers as we won’t have time!

Conceptual Introduction: Situating Digital Methods (Richard Rogers)

My name is Richard Rogers, I’m professor of new media and digital culture at the University of Amsterdam and I have the pleasure of introducing today’s session. So I’m going to do two things, I’ll be situating digital methods in internet-related research, and then taking you through some digital methods.

I would like to situate digital methods as a third era of internet research… I think all of these eras thrive and overlap but they are differentiated.

  1. Web of Cyberspace (1994-2000): Cyberstudies was an effort to see difference in the internet, the virtual as distinct from the real. I’d situate this largely in the 90s and the work of Steve Jones and Steve (?).
  2. Web as Virtual Society? (2000-2007) saw virtual as part of the real. Offline as baseline and “virtual methods” with work around the digital economy, the digital divide…
  3. Web as societal data (2007-) is about “virtual as indication of the real”. Online as baseline.

Right now we use online data about society and culture to make “grounded” claims.

So, if we look at Allrecipes.com Thanksgiving recipe searches on a map we get some idea of regional preference; or if we look at Google data in more depth, we get this idea of internet data as grounding for understanding culture, society, tastes.

So, we had this turn in around 2008 to “web as data” as a concept. When this idea was first introduced not all were comfortable with the concept. Mike Thelwall et al (2005) talked about the importance of grounding the data from the internet. So, for instance, Google Flu Trends can be compared to Wikipedia traffic etc. And with these trends we also get the idea of “the internet knows first”, with the web predicting other sources of data.

Now I do want to talk about digital methods in the context of digital humanities data and methods. Lev Manovich talks about Cultural Analytics. It is concerned with digitised cultural materials, with materials clusterable in a sort of art historical way – by hue, style, etc. And so this is a sort of big data approach that substitutes “continuous change” for periodisation and categorisation for continuation. So, this approach can, for instance, be applied to Instagram (Selfiexploration), looking at mood, aesthetics, etc. And then we have Culturomics, mainly through the Google Ngram Viewer. A lot of linguists use this to understand subtle differences as part of distant reading of large corpora.

And I also want to talk about e-social sciences data and methods. Here we have Webometrics (Thelwall et al) with links as reputational markers. The other tradition here is Altmetrics (Priem et al), which uses online data to do citation analysis, with social media data.

So, at least initially, the idea behind digital methods was to be in a different space: the study of online digital objects, and also natively online methods – methods developed for the medium. And natively digital is meant in a computing sense here: in computing, software has a native mode when it is written for a specific processor, so these are methods specifically created for the digital medium. We also have digitised methods, those which have been imported and migrated – methods adapted slightly to the online.

Generally speaking there is a sort of protocol for digital methods: Which objects and data are available? (links, tags, timestamps); how do dominant devices handle them? etc.

I will talk about some methods here:

1. Hyperlink

For the hyperlink analysis there are several methods. The Issue Crawler software, still running and working, enables you to see links between pages, direction of linking, aspirational linking… For example a visualisation of an Armenian NGO shows the dynamics of an issue network, showing the politics of association.

The other method that can be used here takes a list of sensitive sites, using Issue Crawler, then parses them through an internet censorship service. And there are variations on this that indicate how successful attempts at internet censorship are. We do work on Iran and China, and I should say that we are always quite thoughtful about how we publish these results because of their sensitivity.
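
As a concrete illustration of the basic step such link analysis builds on – not the Issue Crawler itself, whose co-link analysis runs server-side – here is a minimal Python sketch that fetches a seed page and records which external hosts it links to; the seed URL is hypothetical.

    import requests
    from urllib.parse import urljoin, urlparse
    from bs4 import BeautifulSoup

    def outlinks(url):
        """Return the set of external hosts that a page links to."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        own_host = urlparse(url).netloc
        hosts = set()
        for a in soup.find_all("a", href=True):
            host = urlparse(urljoin(url, a["href"])).netloc
            if host and host != own_host:
                hosts.add(host)
        return hosts

    # Build directed edges from a (hypothetical) seed list; repeating over
    # the discovered hosts would grow an issue network.
    seeds = ["https://example-ngo.org"]
    edges = [(seed, target) for seed in seeds for target in outlinks(seed)]
    print(edges)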

2. The website as archived object

We have the Internet Archive and we have individual archived web sites. Both are useful but researcher use is not terribly significant so we have been doing work on this. See also a YouTube video called “Google and the politics of tabs” – a technique to create a movie of the evolution of a webpage in the style of timelapse photography. I will be publishing soon about this technique.

But we have also been looking at historical hyperlink analysis – giving you that context that you won’t see represented in archives directly. This shows the connections between sites at a previous point in time. We also discovered that the “Ghostery” plugin can be used with archived websites – for trackers and for code. So you can see the evolution and use of trackers on any website/set of websites.
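
The tracker-detection idea can be roughly reproduced without Ghostery: pull a Wayback Machine snapshot via the Internet Archive’s availability API and scan its script tags against a list of known tracker domains. A sketch, with a tiny illustrative tracker list:

    import requests
    from bs4 import BeautifulSoup

    # A tiny, illustrative tracker list -- Ghostery's real database is far larger.
    TRACKERS = {"google-analytics.com", "doubleclick.net", "scorecardresearch.com"}

    def archived_trackers(site, timestamp):
        """Find known tracker domains in the Wayback snapshot closest to timestamp."""
        avail = requests.get("https://archive.org/wayback/available",
                             params={"url": site, "timestamp": timestamp}).json()
        snap = avail.get("archived_snapshots", {}).get("closest")
        if not snap:
            return None
        html = requests.get(snap["url"]).text
        soup = BeautifulSoup(html, "html.parser")
        found = {t for s in soup.find_all("script", src=True)
                 for t in TRACKERS if t in s["src"]}
        return snap["timestamp"], found

    print(archived_trackers("bbc.co.uk", "2008"))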

6. Wikipedia as cultural reference

Note: the numbering is from a headline list of 10, hence the odd numbering… 

We have been looking at the evolution of Wikipedia pages, understanding how they change. It seems that pages shift from neutral to national points of view… So we looked at Srebrenica and how that is represented. The pages here have different names, indicating difference in the politics of memory and reconciliation. We have developed a triangulation tool that grabs links and references and compares them across different pages. We also developed comparative image analysis that lets you see which images are shared across articles.
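
As an illustration of what such comparative image analysis involves – a sketch along the same lines, not the group’s actual tool – the MediaWiki API can list the images used on each language version of an article, and the sets can then be intersected; the Serbian article title here is an assumption.

    import requests

    def article_images(lang, title):
        """Image file names used on one language version of an article."""
        r = requests.get(f"https://{lang}.wikipedia.org/w/api.php",
                         params={"action": "query", "titles": title,
                                 "prop": "images", "imlimit": "max",
                                 "format": "json"})
        pages = r.json()["query"]["pages"]
        # Strip the localised namespace prefix ("File:", "Датотека:", ...)
        return {img["title"].split(":", 1)[1]
                for page in pages.values()
                for img in page.get("images", [])}

    en = article_images("en", "Srebrenica massacre")
    sr = article_images("sr", "Масакр у Сребреници")  # title assumed
    print("shared images:", en & sr)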

7. Facebook and other social networking sites

Facebook is, as you probably well know, a social media platform that is relatively difficult to pin down at a moment in time. Trying to pin down the history of Facebook is very hard – it hasn’t been in the Internet Archive for four years, and the site changes all the time. We have developed two approaches: one for social media profiles and interest data as a means of studying cultural taste and political preference, or “Postdemographics”; and “Networked content analysis”, which uses social media activity data as a means of studying “most engaged with content” – that helps with the fact that profiles are no longer available via the API. To some extent the API drives the research, but then taking a digital methods approach we need to work with the medium, and find which possibilities are there for research.

So, one of the projects undertaken in this space was elFriendo, a MySpace-based project which looked at the cultural tastes of “friends” of Obama and McCain during their presidential race. For instance Obama’s friends best liked Lost and The Daily Show on TV; McCain’s liked Desperate Housewives, America’s Next Top Model, etc. Very different cultures and interests.

Now the Networked Content Analysis approach, where you quantify and then analyse, works well with Facebook. You can look at pages and use data from the API to understand the pages and groups that liked each other, to compare memberships of groups etc. (at the time you were able to do this). In this process you could see specific administrator names, and we did this with right wing data working with a group called Hope not Hate, who recognised many of the names that emerged here. Looking at most liked content from groups you also see the shared values, cultural issues, etc.

So, you could see two eras of Facebook Studies: Facebook I (2006-2011), about presentation of self – profiles and interests studies (with ethics); and Facebook II (2011-), which is more about social movements. I think many social media platforms are following this shift – or would like to. So in Instagram Studies, Instagram I (2010-2014) was about selfie culture, but has shifted to Instagram II (2014-), concerned with antagonistic hashtag use for instance.

Twitter has done this and gone further… Twitter I (2006-2009) was about Twitter as an urban lifestyle tool (its origins) and “banal” lunch tweets – their own tagline of “what are you doing?”, a connectivist space. Twitter II (2009-2012) moved to elections, disasters and revolutions – the tagline is “what’s happening?” and we get metrics and “trending topics”. Twitter III (2012-) sees Twitter as a generic resource tool, with commodification of data, stock market predictions, elections, etc.

So, I want to finish by talking about work on Twitter as a storytelling machine for remote event analysis. This is an approach we developed some years ago around the Iran election crisis. We made a tweet collection around a single Twitter hashtag – which is no longer done – and then ordered it by most retweeted (top 3 for each day) and presented it in chronological (not reverse) order. And we then showed those in huge displays around the world…
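
A minimal sketch of that ordering logic, assuming tweets have already been captured as Python dicts with created_at, retweet_count and text fields; the sample rows are invented stand-ins for a captured hashtag collection.

    from collections import defaultdict
    from datetime import datetime

    def daily_top_tweets(tweets, per_day=3):
        """Top retweeted tweets per day, in chronological order."""
        by_day = defaultdict(list)
        for t in tweets:
            by_day[t["created_at"].date()].append(t)
        story = []
        for day in sorted(by_day):
            story += sorted(by_day[day], key=lambda t: t["retweet_count"],
                            reverse=True)[:per_day]
        return story

    sample = [  # invented stand-ins for captured tweets
        {"created_at": datetime(2009, 6, 13, 11), "retweet_count": 301,
         "text": "SMS is down"},
        {"created_at": datetime(2009, 6, 13, 9), "retweet_count": 512,
         "text": "Mousavi holds emergency press conference"},
    ]
    for t in daily_top_tweets(sample):
        print(t["created_at"], t["text"])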

To take you back to June 2009… Mousavi holds an emergency press conference. Voter turnout is 80%. SMS is down. Mousavi’s website and Facebook are blocked. Police use pepper spray… The first 20 days of most popular tweets make a good succinct summary of the events.

So, I’ve taken you on a whistle stop tour of methods. I don’t know if we are coming to the end of this. I was having a conversation the other day about how the Web 2.0 days are really over – the idea that the web is readily accessible, that APIs and data are there to be scraped… That’s really changing. This is one of the reasons the app space is so hard to research. We are moving again to user studies to an extent. What the Chinese researchers are doing involves convoluted processes to get at the data, for instance. But there are so many areas of research that can still be done. Issue Crawler is still out there and other tools are available at tools.digitalmethods.net.

Twitter studies with DMI-TCAT (Erik Borra)

I’m going to be talking about how we can use the DMI-TCAT tool to do Twitter Studies. I am here with Emile den Tex, one of the original developers of this tool alongside Erik Borra.

So, what is DMI-TCAT? It is the Digital Methods Initiative Twitter Capture and Analysis Toolset, a server side tool which aims at robust and reproducible data capture and analysis. The design is based on two ideas: that captured datasets can be refined in different ways; and that the datasets can be analysed in different ways. Although we developed this tool, it is also in use elsewhere, particularly in the US and Australia.

So, how do we actually capture Twitter data? Some of you will have some experience of trying to do this. As researchers we don’t just want the data, we also want to look at the platform in itself. If you are in industry you get Twitter data through a “data partner”, the biggest of which by far is GNIP – owned by Twitter as of the last two years – then you just pay for it. But it is pricey. If you are a researcher you can go to an academic data partner – DiscoverText or Hexagon – and they are also resellers but they are less costly. And then the third route is the publicly available data – REST APIs, Search API, Streaming APIs. These are, to an extent, the authentic user perspective as most people use these… We have built around these but the available data and APIs shape and constrain the design and the data.

For instance the “Search API” prioritises “relevance” over “completeness” – but as academics we don’t know how “relevance” is being defined here. If you want to do representative research then completeness may be most important. If you want to look at how Twitter prioritises the data, then that Search API may be most relevant. You also have to understand rate limits… This can constrain research, as different data has different rate limits.
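
By way of illustration, here is a hedged sketch of polling the 2016-era v1.1 Search API while respecting those rate limits, using the x-rate-limit-* response headers; the bearer token is a placeholder for your own app credentials.

    import time
    import requests

    SEARCH_URL = "https://api.twitter.com/1.1/search/tweets.json"
    BEARER = "YOUR_BEARER_TOKEN"  # placeholder -- use your own app credentials

    def search(query):
        """One page of recent results, sleeping if the rate window is spent."""
        r = requests.get(SEARCH_URL,
                         params={"q": query, "count": 100,
                                 "result_type": "recent"},
                         headers={"Authorization": "Bearer " + BEARER})
        if r.headers.get("x-rate-limit-remaining") == "0":
            reset = int(r.headers["x-rate-limit-reset"])
            time.sleep(max(0, reset - time.time()))  # wait out the window
        r.raise_for_status()
        return r.json()["statuses"]

    print(len(search("#aoir2016")), "tweets in this page")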

So there are many layers of technical mediation here, across three big actors: the Twitter platform – and the APIs and technical data interfaces; DMI-TCAT (extraction); and output types. Those APIs and technical data interfaces are significant mediators here, and it is important to understand their implications for our work as researchers.

So, onto the DMI-TCAT tool itself – more on this in Borra & Rieder (2014) (doi:10.1108/AJIM-09-2013-0094). They talk about “programmed method” and the idea of the methodological implications of the technical architecture.

What can one learn if one looks at Twitter through this “programmed method”? Well: (1) Twitter users can change their Twitter handle, but their ids will remain identical – sounds basic but it’s important to understand when collecting data; (2) the length of a tweet may vary beyond the maximum of 140 characters (mentions and urls); (3) native retweets may have their top level text property shortened; (4) there are unexpected limitations – support for new emoji characters, for example, can be problematic; (5) it is possible to retrieve a deleted tweet.

So, for example, a tweet can vary beyond 140 characters. The retweet of an original post may be abbreviated… Now we don’t want that, we want it to look as it would to a user. So, we capture the non-truncated version in our tool.
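
This works because, in the v1.1 data model, a native retweet embeds the original tweet in its retweeted_status field, so the full text can be reconstructed even when the top-level text is cut off. A small sketch, with an invented tweet object:

    def full_text(tweet):
        """Undo retweet truncation using the embedded original tweet."""
        rt = tweet.get("retweeted_status")
        if rt:
            return "RT @{}: {}".format(rt["user"]["screen_name"], rt["text"])
        return tweet["text"]

    truncated = {  # invented tweet object in the v1.1 shape
        "text": "RT @example: a long tweet that the retweet prefix pushed ov…",
        "retweeted_status": {
            "user": {"screen_name": "example"},
            "text": "a long tweet that the retweet prefix pushed over the 140 character limit",
        },
    }
    print(full_text(truncated))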

And, on the issue of deletion and withholding. There are tweets deleted by users, and there are tweets which are withheld by the platform – and the withholding is a country by country issue, so you can see tweets only available in some countries. A project that uses this information is “Politwoops” (http://politwoops.sunlightfoundation.com/) which captures tweets deleted by US politicians, and lets you filter to specific states, party, position. Now there is an ethical discussion to be had here… We don’t know why tweets are deleted… We could at least talk about it.

So, the tool captures Twitter data in two ways. Firstly there are the direct capture capabilities (via the web front-end) which allow tracking of users and capture of public tweets posted by these users; tracking particular terms or keywords, including hashtags; and getting a small random sample (approx. 1%) of all public statuses. Secondary capture capabilities (via scripts) allow further exploration, including user ids, deleted tweets etc.

Twitter as a platform has a very formalised idea of sociality, the types of connections, parameters, etc. When we use the term “user” we mean it in the platform defined object meaning of the word.

Secondary analytical capabilities, via script, also allow further work:

  1. Support for geographical polygons to delineate geographical regions for tracking particular terms or keywords, including hashtags.
  2. A built-in URL expander, following shortened URLs to their destination, allowing further analysis, including of which statuses are pointing to the same URLs (a minimal expander is sketched after this list).
  3. Downloading media (e.g. videos and images) attached to particular tweets.
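
As an illustration of point 2, here is a minimal sketch of a URL expander – not TCAT’s actual implementation – that follows a shortened link’s redirect chain until it stops redirecting; the short link shown is hypothetical.

    import requests
    from urllib.parse import urljoin

    def expand(url, max_hops=10):
        """Follow a redirect chain to the destination URL."""
        for _ in range(max_hops):
            r = requests.head(url, allow_redirects=False, timeout=10)
            location = r.headers.get("location")
            if r.status_code in (301, 302, 303, 307, 308) and location:
                url = urljoin(url, location)  # handle relative redirects
            else:
                break
        return url

    print(expand("https://t.co/example"))  # hypothetical short link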

So, we have this tool but what sort of studies might we do with Twitter? Some ideas to get you thinking:

  1. Hashtag analysis – users, devices etc. Why? They are often embedded in social issues. (A counting sketch follows this list.)
  2. Mentions analysis – users mentioned in contexts, associations, etc. allowing you to e.g. identify expertise.
  3. Retweet analysis – most retweeted per day.
  4. URL analysis – the content that is most referenced.
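
For idea 1, here is a small sketch of hashtag counting over captured tweets, using the entities field the v1.1 API provides rather than re-parsing the tweet text; the sample data is invented.

    from collections import Counter

    def hashtag_counts(tweets):
        """Case-insensitive hashtag frequencies across captured tweets."""
        counts = Counter()
        for t in tweets:
            for h in t.get("entities", {}).get("hashtags", []):
                counts[h["text"].lower()] += 1
        return counts

    sample = [{"entities": {"hashtags": [{"text": "aoir2016"}]}},
              {"entities": {"hashtags": [{"text": "AoIR2016"},
                                         {"text": "Berlin"}]}}]
    print(hashtag_counts(sample).most_common(5))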

So Emile will now go through the tool and how you’d use it in this way…

Emile: I’m going to walk through some main features of the DMI TCAT tool. We are going to use a demo site (http://tcatdemo.emiledentex.nl/analysis/) and look at some Trump tweets…

Note: I won’t blog everything here as it is a walkthrough, but we are playing with timestamps (the tool uses UTC), search terms etc. We are exploring hashtag frequency… In that list you can see Benghazi, tpp, etc. Now, once you see a common hashtag, you can go back and query the dataset again for that hashtag/search terms… And you can filter down… And look at “identical tweets” to find the most retweeted content.

Emile: Erik called this a list making tool – it sounds dull but it is so useful… And you can then put the data through other tools. You can put tweets into Gephi. Or you can do exploration… We looked at the Getty Parks project, scraped images, reverse Google image searched those images to find the originals, checked the metadata for the camera used, and investigated whether the cost of a camera was related to success in distributing an image…
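
The camera-metadata step can be roughly reconstructed with Pillow, reading the EXIF “Model” tag from each downloaded image; this is a sketch rather than the project’s actual pipeline, and the file names are hypothetical.

    from PIL import Image
    from PIL.ExifTags import TAGS

    def camera_model(path):
        """Return the EXIF 'Model' tag of an image, if present."""
        exif = Image.open(path).getexif()
        for tag_id, value in exif.items():
            if TAGS.get(tag_id) == "Model":
                return value
        return None

    for f in ["image1.jpg", "image2.jpg"]:  # hypothetical downloaded images
        print(f, camera_model(f))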

Richard: It was a critique of user generated content.

Analysing Social Media Data with TCAT and Tableau (Axel Bruns)

Analysing Network Dynamics with Agent Based Models (Patrik Wikström)

Tracking the Trackers (Anne Helmond, Carolin Gerlitz, Esther Weltevrede and Fernando van der Vlist)

Multiplatform Issue Mapping (Jean Burgess & Ariadna Matamoros Fernandez)

Analysing and visualising geospatial data (Peta Mitchell)

 


eLearning@ed/LTW Monthly Meet Up #4: Learning Design

This is a very belated posting of my liveblog notes from the eLearning@Ed/LTW Monthly Meet Up #4 on Learning Design which took place on 25th April 2016. You can find further information on the event, and all of our speakers’ slides, on the eLearning@ed wiki.

Despite the delay in posting these notes, the usual cautionary notes apply, and all corrections, additions, etc. are very much welcomed. 

Becoming an ELDeR – Fiona Hale, Senior eLearning Advisor, IS

Unfortunately I missed capturing notes for the very beginning of Fiona’s talk but I did catch most of it. As context please be aware that she was talking about a significant and important piece of work on Learning Design, including a scoping report by Fiona, which has been taking place over the last year. My notes start as she addresses the preferred formats for learning design training… 

We found that two-day workshops provided space to think and collaborate, and the opportunity to both gain new knowledge and apply it on the same day. They were also really useful for helping academic staff understand the range of colleagues in the room, knowing who they could and should follow up with.

The scoping report recommended developing reusable and collaborative learning design as a new university service within IS, which positions the learning design framework as a scaffold, support staff as facilitators, etc.

There are many recommendations here but in particular I wanted to talk about the importance of workshops being team based and collaborative in approach – bringing together programme team, course team, admin, LT, peer, student, IAD, IS support librarian, IS EDE, facilitator, all in the room. Also part of staff development, reward and recognition – tying into the UKPSF (HEA) and the Edinburgh Teaching Award. And ensuring this is an embedded process, with connection to processes, language, etc. with registry, boards of studies, etc. And also with multiple facilitators.

I looked for frameworks and focused on three to evaluate. These tend to be theoretical, and don’t always work in practice. After trying them all out we found CAIeRO worked best, focusing on designing learning experiences over development of content, with the structured format of the two day workshop. And it combines pedagogy, technology, and learner experience.

We have developed the CAIeRO into a slightly different form, the ELDeR Framework, with the addition of assessment and feedback.

Finally! Theory and Practice – Ruth McQuillan, Co-Programme Director, Master of Public Health (online)

Prior to the new MPH programme I had been working in online learning since 2011. I am part of a bigger team – Christina Matthews is our learning technologist and we have others who have come on board for our new programme. Because we had a new programme launching we were very keen to be part of this. So I’m going to talk about how this worked, how we felt about it, etc.

We launched the online MPH in September 2015, which involved developing lots of new courses but also modifying lots of existing courses. And we have a lot of new staff so we wanted to give a sense of building a new team – as well as learning for ourselves how to do it all properly.

So, the stages of the workshop we went through should give you a sense of it. I’ve been on lots of courses and workshops where you learn about something but you don’t have the practical application. And then you have a course to prepare in practice, maybe without that support. So having both aspects together was really good and helpful.

The course we were designing was for mid career professionals from across the world. We were split into two teams – each having a blend of the kinds of people Fiona talked about: programme team and colleagues from IS and elsewhere. Both teams developed programme and course mission statements as a group, then compared them, and happily those were quite close; we reached consensus and that really felt like we were pulling together as a team. And we also checked the course for consistency with the programme.

Next, we looked at the look and feel aspects. We used workshop cards and post-it notes, rejecting cards that weren’t relevant for our course, and using our choice of the cards plus some of our own additions.

So, Fiona talked about beginning with the end in mind, and we tried to do that. We started by thinking about what we wanted our students to be able to do at the end of the course. That is important as this is a professional course where we want to build skills and understanding. So, we wanted to focus on what they should be able to do at the end of the course, and only then look at the knowledge they would need. And that was quite a different, liberating approach.

And at this point we looked at the SCQF level descriptors to think about learning outcomes, the “On completion of this course you will be able to…” I’m not sure we’d appreciated the value and importance of our learning outcomes before, but actually in the end this was one of the most useful parts of the process. We looked for Sense (are they clear to the learner); Level (are they appropriate to the level of module); Accessibility (are they accessible).

And then we needed to think about assessment and alignment, looking at how we would assess the course, how this fitted into the bigger picture etc.

The next step was to storyboard the course. By the end of Day One we had a five week course and a sixth week for assessment; we had learning outcomes and how they’d be addressed, assessment, learning activities, concerns, scaffolding. And we thought we’d done a great job! We came back on day two and spent maybe half a day recapping and changing things… Even if you can’t do a 2 day workshop, at least try to do two half days with a big gap between/overnight, as we found that space away very helpful.

And once finalised we built a prototype online. And we had a reality check from a critical friend, which was very helpful. We reviewed and adjusted and then made a really detailed action plan. That plan was really helpful.

Now, at the outset we were told that we could come into this process at any point. We had quite a significantly complete idea already and that helped us get real value from this process.

So, how did it feel and what did we learn? Well it was great to have a plan, to see the different areas coming together. The struggle was difficult but important, and it was excellent for team building. “To learn and not to do is really not to learn. To do and not to learn is really not to know.” And actually at the end of the day we were really enthusiastic about the process and it was really good to see that process, to put theory into practice, and to do this all in a truly collaborative experience.

How has it changed us? Well we are putting all our new courses through this process. We want to put all our existing courses through this process. We involved more people in the process, in different roles and stages, including students where we can. And we have modified the structure.

Q&A

Q1) Did you go away to do this?

A1) Yes, we went to Dovecot Gallery on Infirmary Street.

A1 – FH) I had some money to do that but I wasn’t kidding that a new space and nice food are important. We are strict on you being there, or not. We expect full-on participation. So for those going forward we are looking at rooms in other places – in Evolution House, or in Moray House, etc. Somewhere away from normal offices etc. It has to be focused. And the value of that is huge; the time up front is really valuable.

A1 – RM) It is also really important for understanding what colleagues are doing, which helps ensure the coherence of the programme, and it is really beneficial to the programme.

Q2) How different do you think your design would have ended up if you hadn’t done this?

A2 – RM) I think one of my colleagues was saying today that she was gently nudged by colleagues to avoid mistakes or pitfalls, to not overload the course, to ensure coherence, etc. I think it’s completely different to how it would have been. And also there were resources and activities – lectures and materials – that could be shared where gaps were recognised.

A2 – FH) If this had been content driven it would be hard as a facilitator. But thinking about the structure, the needs, the learner experience, that can be done, with content and expertise already being brought into that process. It saves time in the long run.

A2 – RM) I know in the past when I’ve been designing courses you can find that you put activities in a particular place without purpose, to make sure there is an activity there… But this process helped keep things clear, coherent and to ensure any activity is clearly linked to a learning outcome, etc.

Q3) Once you’d created the learning outcomes, did you go back and change any of them?

A3 – FH) On Day 2 there was something that wasn’t quite right…

A3 – RM) It was something too big for the course, and we needed to work that through. We were working on the course in February and it will run for the first time in the new academic year. But actually the UoE system dictates that learning outcomes should be published many months/more than a year in advance. So with new courses we did ask the board of studies if we could provide the learning outcomes to them later on, once defined. They were fine with that.

A3 – FH) That is a major change that we are working on. But not all departments run the same process or timetable.

A3 – RM) Luckily our board of studies were very open to this, it was great.

Q4) Was there any focus on student interaction and engagement in this process?

A4 – FH) It was part of those cards early in the process, it is part of the design work. And that stage of the cards, the consensus building – those are hugely collaborative and valuable sessions.

Q5) And how did you support/require that?

A5 – FH) In that storyboard you will see various (yellow) post-its showing assessment and feedback woven in across the course, ensuring the courses you design really do align with that wider University strategy.

Learning Design: Paying It Forward – Christina Matthews

There is a shift across the uni to richer approaches.

I’m going to talk about getting learning technologist involved and why that matters.

The LT can inform the process in useful and creative ways. They can bring insights into particular tools, affordances, and ways to afford or constrain the behaviours of students. They also have a feel for the digital literacy of students, as well as being able to provide some continuity across the course in terms of approaches and tools. With an LT in the design process, academic staff can feel supported and better able to take risks and do new things. And the LT can help ensure that nothing is lost between the design workshop and the actual online course and implementation.

So, how are we paying this forward? Well we are planning learning design workshops for all our new courses for 2015-16 and 2016-17. We really did feel the benefits of 2 days but we didn’t think it was going to be feasible for all of our teams. We felt that we needed to adapt the workshop to fit into one day, so we will be running these as one day workshops and we have prioritised particular aspects to enable that.

The two day workshop format for CAIeRO follows several stages:

  • Stage 1: Course blueprint (mission, learning outcomes, assessment and feedback)
  • Stage 2: Storyboarding
  • Stage 3: Rapid prototyping in the VLE
  • Stage 4: Critical friend evaluation of VLE prototype
  • Stage 5: adjust and review from feedback
  • Stage 6: Creating an action plan
  • Stage 7: reflecting on the workshop in relation to the UK Professional Standards Framework.

For the one day workshop we felt the blueprint (1), storyboard (2) and action plan (6) stages were essential. The prototyping can be done afterwards and separately, although it is a shame to do that of course.

So, we are reviewing and formalising our 1 day workshop model, which may be useful elsewhere. And we are using these approaches for all the courses on our programme, including new and existing courses. And we are very much looking forward to the ELDeR (Edinburgh Learning Design Roadmap).

Q&A

Q1) When you say “all” programmes, do you mean online or on-campus programmes?

A1) Initially the online courses, but we have a campus programme that we really want to connect up, to make the courses more blended, so I think it will feed into our on campus courses. A lot of our online tutors teach both online and on campus, so that will also lead to some feeding in here.

Q2) How many do you take to the workshop?

A2) You can have quite a few. We’ve had programme director, course leader, learning technologist, critical friends, etc.

A2 – FH) There are no observers in the room for workshops – lots of people want to understand that. You have to facilitate the learning objectives section very carefully. Too many people is not useful. Everyone has to be trusted, they have to be part of the process. You need a support librarian; the learning technologist has to squarely be part of the design; student; reality checker; QA… I’ve done at most 8 people. In terms of students, you need to be able to be open and raw… So, is it OK to have students in the room? Some conversations being had may not be right for that co-creation type idea. Maybe alumni are better in some cases. Some schools don’t have their own learning technologist, so we bring one. Some don’t have a VLE, so we bring one they can play with.

A2 – CM) In the pilot there were 8 in some, but it didn’t feel like too many in the room.

Q3) As a learning technologist have the workshops helped your work?

A3 – CM) Yes, hugely. That action plan really maps out every stage very clearly. Things can come in last minute and all at the same time otherwise, so that is great. And when big things are agreed in the workshop, you can then focus on the details.

A3 – FH) We are trying to show how getting this all resolved up front actually saves money and time later on, as everything is agreed.

Q4) Thinking way ahead… People will do great things… So if we have the course all mapped out here, and well agreed, what happens when teams change – how do you capture and communicate this? Should you have a mini reprise of this to revisit it? How does it go over the long term?

A4 – FH) That’s really true. Also if technologist isn’t the one delivering it, that can also be helpful.

A4 – CM) One thing that comes out of this is a CAIeRO planner that can be edited and shared, but yes, maybe you revisit it for future staff…

A4 – FH) Something about ownership of activities, to give the person coming in a feeling of ownership. And to see how it works before and afterwards. Pointing them to documents, to the output of the storyboard, to get ownership. That’s key to facilitation too.

Q4) So, you can revisit activities etc. to achieve Learning outcome…

A4 – FH) The identification of learning outcomes is clear in the storyboards and documents.

Q5) How often do you meet and review programmes? Every 2 years, every 5 years?

A5 – FH) You should review every 5 years for PG.

Comment) We have an annual event, see what’s working and what isn’t and that is very very valuable and helpful. But that’s perhaps unusual.

A5 – FH) That’s the issue of last minute or isolated activities. This process is a good structure for looking at programme and course. Clearly programme has assessment across it so even though we are looking at the course here, it has that consistency. With any luck we can get this stuff embedded in board of studies etc.

A5 – RM) For us doing this process also changed us.

A5 – FH) That report is huge, but at the universities I looked at these processes are mandatory, not optional. But mandatory can make things more about box ticking in some ways…

Learning Design: 6 Months on – Meredith Corey, School of Education 

We are developing a pilot UG course, a GeoSciences and Education collaboration, Sustainability and Social Responsibility, running 2016/17. We are 2 online learning educators who worked on this from August 2015 to April 2016. This is the first online level 8 course for on-campus students. And there are plans to adapt the course for the wider community – including staff, alumni etc.

So in the three months before the CAIeRO session, we had started looking at existing resources, building a course team, investigating VLEs. The programme is on sustainability. We looked into types of resources and activities. And we had started drafting learning outcomes and topic storyboarding, with support from Louise Connelly who was (then) in IAD.

So the workshop was a 2 day event and we began with the blueprinting. We had similar ideas and very different ways to describe them, so what was very useful for us was finding common language and ways to describe what we were doing. We didn’t drastically change our learning outcomes, but there was lots of debate about the wording. We were trying to ensure the learning outcomes were appropriate for SCQF level 8, trying not to overload them. And this whole process has helped us focus on our priorities, our vocabulary, the justification and clear purpose.

The remainder of the workshop was spent on storyboarding. We thought we were really organised in terms of content, videos, etc. But actually that storyboarding, after that discussion of priorities, was really useful. Our storyboard generated three huge A0 sheets to understand the content, the ways students would achieve the learning outcomes. It is an online course and there are things you don’t think about but need to consider – how do they navigate the course? How do they find what they need? And Fiona and colleagues were great for questioning and probing that.

We did some prototyping but didn’t have time for reality checks – but we have that process lined up for our pilot in the summer. We also took that storyboard and transferred that information to a huge Popplet that allowed us to look at how the feedback and feed forward fits into the course; how we could make that make sense across the course – it’s easy to miss that feedback and feed forward is too late when you are looking week by week.

The key CAIeRO benefits for us were around exploring priorities (and how these may differ for different cohorts); it challenged our assumptions; it formalised our process and this is useful for future projects; focused on all learners and their experience; and really helped us understand our purpose here. And coming soon we shall return to the Popplet to think about the wider community.

Q&A

Q1) I know with one course the head of school was concerned that an online programme might challenge the value of the face to face course, or replace it – and asked how the two fit together.

A1) The hope is that the strength of this course is that it brings together students from as many different schools as possible, to really deal with timetabling barriers, to mix students between schools. It would be good if both exist to complement each other.

A1 – FH) It’s not intended as a replacement… The mission statement for this course plays up interdisciplinary issues, and that includes use of OERs, reuse, etc. And talking about doing this stuff.

A1) And also the idea is to give students a great online learning experience that means they might go on and do online masters programmes. And hopefully include staff and alumni that also help that mix, that interdisciplinary thing.

Q2) Do you include student expectations in this course? What about student backgrounds?

A2) We have tried to ensure that tutorial groups play to student strengths and interests, making combinations across schools. We are trialling the course with evaluation through very specific questions.

A2 – FH) And there will be assessment that asks students to place that learning into their own context, location, etc.

Course Design and your VLE – Ross Ward

I want to talk quickly about how you translate a storyboard into your VLE, in very general terms – taking your big ideas and making them a course. One thing I like to talk about a lot is user experience – you only need one bad experience in Learn or Moodle to really put you off. So you really need to think about ensuring the experience of the VLE and the experience of the course fit together. How you manage or use your VLE is up to you. Once you know what you want to do, you can then pick your technology, fitting your needs. And you’ll need a mix of content, tools, activities, grades, feedback, guidance. If you are an ODL student how that is structured will be very, very important; if blended, it’s still important. You don’t need your VLE to be a filing cabinet, it can be much more. But it also doesn’t have to be a grand immersive environment – you need it to fit your needs appropriately. And the VLE experience should reflect the overall course experience.

When you have that idea of purpose, you hit the technology and you have kind of a blank canvas. It’s a bit Mona Lisa by numbers… The tools are there but there are easier ways to make your course better. The learning design idea of the storyboard and the user experience of the course context can be very helpful. That is really useful for ensuring students understand what they are doing, creating a digital version of your course, and understanding where you are right now as a student. Arguably a good VLE user experience is one where you could find what you are looking for without any prior knowledge of the course… We get many support calls from those simply looking for information. You may have some pre-requisite stuff, but you need to really make everything easy.

Navigation is key! You need menus. You need context links. You need suggested links. You want to minimise the number of clicks and the complexity.

Remember that you should present your material for online, not like a textbook. Use sensible headings. Think about structure. And test it out – ask a colleague, ask a student, ask LTW.

And think about consistency – that will help ensure that you can build familiarity with approach, consistently presenting your programme/school brand and look and feel, perhaps also template.

We know this is all important, and we want to provide more opportunity to support that, with examples and resources to draw upon!

Closing – Fiona Hale

Huge thanks to Ross for organising today. Huge thanks to our speakers today!

If you are interested in this work do find me at the end, do come talk to me. We have workshops coming up – ELDeR workshop evaluations – and there we’ll talk about design challenges and concerns. That might be learning analytics – and thinking about pace and workshops. For all of these we are addressing particular design challenges – the workshop can concertina to that. There is no rule about how long things take – or whether one day or two days is the right number – and sometimes one won’t be enough.

I would say for students it’s worth thinking about sharing the storyboards, the assessment and feedback and reasons for it, so that they understand it.

We go into service in June and July, with facilitators across the schools. Do email me with questions, to offer yourselves as facilitators.

Thank you to all of our University colleagues who took part in this really interesting session!

You can read much more about Edinburgh Learning Design roadmap – and read the full scoping report – on the University of Edinburgh Learning Design Service website. 


A Summer of New Digital Footprints…

It has been a while since I’ve posted something other than a liveblog here but it has been a busy summer so it seems like a good time to share some updates…

A Growing Digital Footprint

Last September I was awarded some University of Edinburgh IS Innovation Fund support to develop a pilot training and consultancy service to build upon the approaches and findings of our recent PTAS-funded Managing Your Digital Footprint research project.

During that University of Edinburgh-wide research and parallel awareness-raising campaign we (my colleague – and Digital Footprint research project PI – Louise Connelly of IAD/Vet School, myself, and colleagues across the University) sought to inform students of the importance of digital tracks and traces in general, particularly around employment and “eProfessionalism”. This included best practice advice around use of social media, personal safety and information security choices, and thoughtful approaches to digital identity and online presences. Throughout the project we were approached by organisations outside of the University for similar training, advice, and consulting around social media best practices and that is how the idea for this pilot service began to take shape.

Over the last few months I have been busy developing the pilot, which has involved getting out and about delivering social media training sessions for clients including NHS Greater Glasgow and Clyde (with Jennifer Jones); for the British HIV Association (BHIVA) with the British Association for Sexual Health and HIV (BASHH) (also with Jennifer Jones); developing a “Making an Impact with your Blog” Know How session for the lovely members of Culture Republic; leading a public engagement session for the very international gang at EuroStemCell, and an “Engaging with the Real World” session for the inspiring postgrads attending the Scottish Graduate School of Social Science Summer School 2016. I have also been commissioned by colleagues in the College of Arts, Humanities and Social Sciences to create an Impact of Social Media session and accompanying resources (the latter of which will continue to develop over time). You can find resources and information from most of these sessions over on my presentations and publications page.

These have been really interesting opportunities and I’m excited to see how this work progresses. If you do have an interest in social media best practice, including advice for your organisation’s social media practice, developing your online profile, or managing your digital footprint, please do get in touch and/or pass on my contact details. I am in the process of writing up the pilot and looking at ways myself and my colleagues can share our expertise and advice in this area.

Adventures in MOOCs and Yik Yak

So, what next?

Well, the Managing Your Digital Footprint team have joined up with colleagues in the Language Technology Group in the School of Informatics for a new project looking at Yik Yak. You can read more about the project, “A Live Pulse: Yik Yak for Understanding Teaching, Learning and Assessment at Edinburgh“, on the Digital Education Research Centre website. We are really excited to explore Yik Yak’s use in more depth as it is one of a range of “anonymous” social networking spaces that appear to be emerging as important alternative spaces for discussion as mainstream social media spaces lose favour/become too well inhabited by extended families, older contacts, etc.

Our core Managing Your Digital Footprint research also continues… I presented a paper, co-written with Louise Connelly, at the European Conference on Social Media 2016 this July on “Students’ Digital Footprints: curation of online presences, privacy and peer support”. This summer we also hosted visiting scholar Rachel Buchanan of University of Newcastle, Australia, who has been leading some very interesting work into digital footprints across Australia. We are very much looking forward to collaborating with Rachel in the future – watch this space!

And, more exciting news: my lovely colleague Louise Connelly (University of Edinburgh Vet School) and I have been developing a Digital Footprint MOOC which will go live later this year. The MOOC will complement our ongoing University of Edinburgh service (run by IAD) and external consultancy work (led by us in EDINA). You can find out much more about that in this poster, presented at the European Conference on Social Media 2016 earlier this month…

Preview of Digital Footprint MOOC Poster

Alternatively, you could join me for my Cabaret of Dangerous Ideas 2016 show….

Cabaret of Dangerous Ideas 2016 - If I Googled You, What Would I Find? Poster

The Cabaret of Dangerous Ideas runs throughout the Edinburgh Fringe Festival but every performance is different! Each day academics and researchers share their work by proposing a dangerous idea, a provocative question, or a challenge, and the audience are invited to respond, discuss, ask difficult questions, etc. It’s a really fun show to see and to be part of – I’ve now been fortunate enough to be involved each year since it started in 2013. You can see a short video on #codi2016 here:

In this year’s show I’ll be talking about some of those core ideas around managing your digital footprint, understanding your online tracks and traces, and reflecting on the type of identity you want to portray online. You can find out more about my show, If I Googled You What Would I Find, in my recent “25 Days of CODI” blog post:

25 Days of CoDI: Day 18

You’ll also find a short promo film for the series of data, identity, and surveillance shows at #codi2016 here:

So… A very busy summer of social media, digital footprints, and exciting new opportunities. Do look out for more news on the MOOC, the YikYak work and the Digital Footprint Training and Consultancy service over the coming weeks and months. And, if you are in Edinburgh this summer, I hope to see you on the 21st at the Stand in the Square!

 


A Mini Adventure to Repository Fringe 2016

After 6 years of being Repository Fringe‘s resident live blogger this was the first year that I haven’t been part of the organisation or amplification in any official capacity. From what I’ve seen though my colleagues from EDINA, University of Edinburgh Library, and the DCC did an awesome job of putting together a really interesting programme for the 2016 edition of RepoFringe, attracting a big and diverse audience.

Whilst I was mainly participating through reading the tweets to #rfringe16, I couldn’t quite keep away!

Pauline Ward at Repository Fringe 2016

This year’s chair, Pauline Ward, asked me to be part of the Unleashing Data session on Tuesday 2nd August. The session was a “World Cafe” format and I was asked to help facilitate discussion around the question: “How can the repository community use crowd-sourcing (e.g. Citizen Science) to engage the public in reuse of data?” – so I was along wearing my COBWEB: Citizen Observatory Web and social media hats. My session also benefited from what I gather was an excellent talk on “The Social Life of Data” earlier in the event from Erinma Ochu (who, although I missed her this time, is always involved in really interesting projects, including several fab citizen science initiatives).

 

I won’t attempt to reflect on all of the discussions during the Unleashing Data Session here – I know that Pauline will be reporting back from the session to Repository Fringe 2016 participants shortly – but I thought I would share a few pictures of our notes, capturing some of the ideas and discussions that came out of the various groups visiting this question throughout the session. Click the image to view a larger version. Questions or clarifications are welcome – just leave me a comment here on the blog.

Notes from the Unleashing Data session at Repository Fringe 2016

Notes from the Unleashing Data session at Repository Fringe 2016

Notes from the Unleashing Data session at Repository Fringe 2016

 

If you are interested in finding out more about crowd sourcing and citizen science in general then there are a couple of resources that may be helpful (plus many more resources and articles if you leave a comment/drop me an email with your particular interests).

This June I chaired the “Crowd-Sourcing Data and Citizen Science” breakout session for the Flooding and Coastal Erosion Risk Management Network (FCERM.NET) Annual Assembly in Newcastle. The short slide set created for that workshop gives a brief overview of some of the challenges and considerations in setting up and running citizen science projects:

Last October the CSCS Network interviewed me on developing and running Citizen Science projects for their website – the interview brings together some general thoughts as well as specific comment on the COBWEB experience:

After the Unleashing Data session I was also able to stick around for Stuart Lewis’ closing keynote. Stuart has been working at Edinburgh University since 2012 but is moving on soon to the National Library of Scotland so this was a lovely chance to get some of his reflections and predictions as he prepares to make that move. And to include quite a lot of fun references to The Secret Diary of Adrian Mole aged 13 ¾. (Before his talk Stuart had also snuck some boxes of sweets under some of the tables around the room – a popularity tactic I’m noting for future talks!)

So, my liveblog notes from Stuart’s talk (slightly tidied up but corrections are, of course, welcomed) follow. Because old Repofringe live blogging habits are hard to kick!

The Secret Diary of a Repository aged 13 ¾ – Stuart Lewis

I’m going to talk about our bread and butter – the institutional repository… Now my inspiration is Adrian Mole… Why? Well we have a bunch of teenage repositories… EPrints is 15½; Fedora is 13½; DSpace is 13¾.

Now Adrian Mole is a teenager – you can read about him on Wikipedia [note to fellow Wikipedia contributors: this, and most of the other Adrian Mole-related pages could use some major work!]. You see him quoted in two conferences to my amazement! And there are also some Scotland and Edinburgh entries in there too… Brought a haggis… Goes to Glasgow at 11am… and says he encounters 27 drunks in one hour…

Stuart Lewis at Repository Fringe 2016

Stuart Lewis illustrates the teenage birth dates of three of the major repository softwares as captured in (perhaps less well-aged) pop hits of the day.

So, I have four points to make about how repositories are like/unlike teenagers…

The thing about teenagers… People complain about them… They can be expensive, they can be awkward, they aren’t always self aware… Eventually though they usually become useful members of society. So, is that true of repositories? Well ERA, one of our repositories, has gotten bigger and bigger – over 18k items… and over 10k paper theses currently being digitised…

Now teenagers also start to look around… Pandora!

I’m going to call Pandora the CRIS… And we’ve all kind of overlooked their commercial background because we are in love with them…!

Stuart Lewis at Repository Fringe 2016

Stuart Lewis captures the eternal optimism – both around Mole’s love of Pandora, and our love of the (commercial) CRIS.

Now, we have PURE at Edinburgh which also powers Edinburgh Research Explorer. When you looked at repositories a few years ago, it was a bit like Freshers Week… The three questions were: where are you from; what repository platform do you use; how many items do you have? But that’s moved on. We now have around 80% of our outputs in the repository within the REF compliance window (3 months of acceptance)… And that’s a huge change – volumes of materials are open access very promptly.

So,

1. We need to celebrate our success

But are our successes as positive as they could be?

Repositories continue to develop. We’ve heard good things about new developments. But how do repositories demonstrate value – and how do we compare to other areas of librarianship?

Other library domains use different numbers. We can use these to give comparative figures. How do we compare to publishers for cost? What’s our CPU (Cost Per Use)? And what is a good CPU? £10, £5, £0.46… But how easy is it to calculate – are repositories expensive? That’s a “to do” – to set the cost of running the repository against IRUS usage figures. I would expect it to be lower than publishers, but I’d like to do that calculation.
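
The CPU arithmetic itself is straightforward; a back-of-envelope sketch with invented figures (a real calculation would set the repository’s running costs against its IRUS-UK download counts):

    # All figures invented for illustration.
    repo_annual_cost = 50_000.0       # hypothetical staff + infrastructure
    repo_annual_downloads = 250_000   # hypothetical IRUS-UK usage figure

    cpu = repo_annual_cost / repo_annual_downloads
    print("repository CPU: £%.2f per download" % cpu)  # £0.20 here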

The other side of this is to become more self-aware… Can we gather new numbers? We only tend to look at deposit and use from our own repositories… What about our own local consumption of OA (the reverse)?

Working within new e-resource infrastructure – http://doai.io/ – lets us see where open versions are available. And we can integrate with OpenURL resolvers to see how much of our usage could be fulfilled from open versions.
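
As a sketch of that lookup (DOAI, as it operated at the time, redirected a DOI request to an open copy where one was known): resolve a DOI through doai.io and see where you land. The DOI below is the Borra & Rieder paper cited earlier.

    import requests

    def oa_location(doi):
        """Final landing page after DOAI's redirects for a given DOI."""
        r = requests.head("https://doai.io/" + doi,
                          allow_redirects=True, timeout=10)
        return r.url

    print(oa_location("10.1108/AJIM-09-2013-0094"))  # the Borra & Rieder paper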

2. Our repositories must continue to grow up

Do we have double standards?

Hopefully you are all aware of the UK Text and Data Mining Copyright Exception that came into force on 1st June 2014. We have massive access to electronic resources as universities, and can text and data mine those.

Some do a good job here – Gale Cengage Historic British Newspapers: additional payment to buy all the data (images + XML text) on hard drives for local use. Working with local informatics LTG staff to (geo)parse the data.

Some are not so good – basic APIs allow only simple searches… but not complex queries (e.g. you could use a search term, but not, say, sentiment).

And many publishers do nothing at all….

So we are working with publishers to encourage and highlight the potential.

But what about our content? Our repositories are open, with extracted full-text, and data can be harvested… Sufficient, but is it ideal? Why not allow bulk download in one click… You can – for example – download all of Wikipedia (if you want to). We should be able to do that with our repositories.
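Harvesting, at least, is already standard: most repository platforms expose an OAI-PMH endpoint. A minimal sketch using the Sickle library – the endpoint URL is hypothetical, not a real repository address:

```python
from sickle import Sickle  # pip install sickle

sickle = Sickle("https://repository.example.ac.uk/oai/request")  # hypothetical endpoint
records = sickle.ListRecords(metadataPrefix="oai_dc", ignore_deleted=True)

# Sample the first ten records' identifiers and titles.
for i, record in enumerate(records):
    title = record.metadata.get("title", ["(no title)"])[0]
    print(record.header.identifier, "-", title)
    if i >= 9:
        break
```

Record-by-record harvesting is exactly the point being made, though: there is still no equivalent of Wikipedia’s one-click bulk dump.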

3. We need to get our house in order for Text and Data Mining

When will we be finished though? That depends on what we do with open access. What should we be doing with OA? Where do we want to get to? Right now we have mandates, so it’s easy – green and gold. With gold there is pure gold or hybrid… and there are mixed views on hybrid. You can also publish locally for free. Then for green there are local or disciplinary repositories… For gold – pure, hybrid – we pay APCs (some local options are free)… In hybrid we can also do offsetting, discounted subscriptions, voucher schemes. And for green we have the UK Scholarly Communications Licence (Harvard-style)…

But which of these forms of OA are best?! Is choice always a great thing?

We still have outstanding OA issues. Is a mixed-modal approach OK, or should we choose a single route? Which one? What role will repositories play? What is the ultimate aim of Open Access? Is it “just” access?

How and where do we have these conversations? We need academics, repository managers, librarians, publishers to all come together to do this.

4. Do we know what a grown-up repository looks like? What part does it play?

Please remember to celebrate your repositories – we are in a fantastic place, making a real difference. But they need to continue to grow up. There is work to do with text and data mining… And we have more to do… To be a grown up, to be in the right sort of environment, etc.

 

Q&A

Q1) I can remember giving my first talk on repositories in 2010… When it comes to OA I think we need to think about what is cost effective, what is sustainable, why are we doing it and what’s the cost?

A1) I think in some ways that’s about what repositories are versus publishers… Right now we are essentially replicating them… And maybe that isn’t the way to approach this.

And with that Repository Fringe 2016 drew to a close. I am sure others will have already blogged their experiences and comments on the event. Do have a look at the Repository Fringe website and at #rfringe16 for more comments, shared blog posts, and resources from the sessions. 


The difference between human and posthuman learning – Prof. Catherine Hasse, Aarhus University – Belated LiveBlog

On 27th June I attended a lunchtime seminar, hosted by the University of Edinburgh Centre for Research in Digital Education, with Professor Catherine Hasse of Aarhus University.

Catherine is opening with a still from Ex Machina (2015, dir. Alex Garland). The title of my talk is the difference between human and posthuman learning; I’ll talk for a while but I’ve moved a bit from my title… My studies in posthuman learning have moved me to more of a posthumanistic learning… Today human beings are capable of many things – we can transform ourselves, and ourselves in our environment. We have to think about that and discuss that, to take account of that in learning.

I come from the Centre for Future Technology, Culture and Learning, Aarhus University, Denmark. We are a hugely interdisciplinary team. We discuss and research what learning is under these new conditions, and consider the implications for education. I’ll talk less about education today, more about the type of learning taking place and the ways we can address that.

My own background is in the anthropology of education in Denmark, specifically looking at physicists. In 2015 we got a big grant to work on “The Technucation Project” and we looked at the anthropology of education in Denmark for nurses and teachers – and the types of technological literacy they require for their work. My work (in English) has been about “Mattering” – the learning changes that matter to you. The learning theories I am interested in acknowledge cultural differences in learning, something we have to take account of. What it is to be human is already transformed. Posthumanistic learning is a new conceptualisation, with material conditions that change what it was to be human. It was, and is, ultra-human to be learners.

So… I have become interested in robots… They are coming into our lives. They are not just tools. Human beings encounter tools that they haven’t asked for. You will be aware of predictions that over a third of jobs in the US may be taken over by automated processes and robots in the next 20 years. That comes at the same time as pressure on the human body to become different, at the point at which our material conditions are changing very rapidly. A lot of theorists are picking up on this moment of change, and engaging with the idea of what it is to be human – including those in Science and Technology Studies, and in feminist critique. Some anthropologists suggest that it is not geography but humans that should shape our conceptions of the world (Anthropos – Anthropocene); others differ and conceive of the Capitalocene. A lot of posthuman theories acknowledge that we can’t think of the human in the same way anymore. Kirksey & Helmreich (2010) talk of “natural-cultural hybrids”, and we see everything from heart valves to sensors, to iris scanning… We are seeing robots, cyborgs, amalgamations, including how our thinking feeds into systems – like the stockmarkets (especially today!). The human is de-centred in this amalgamation but is still there. And we may yet get to that creature from Ex Machina, the complex sentient robot/cyborg.

We see posthuman learning in the uncanny valley… gradually we will move from robots that feel far away, to those with human tissues, with something more human and blended. The new materialism and robotics together challenge the conception of the human. When we talk of learning we talk about how humans learn, not what follows when bodies are transformed by other (machine) bodies. And here we have to be aware that in feminism people like Rosa Predosi(?) have been happy with the discarding of the human: for them it was always a narrative, it was never really there. The feminist critique is that the “human” was really Vitruvian man… But they also critique the idea that the posthuman is a continuation of the individual goal-directed, rational, self-enhancing (white male) human. And that questions the posthuman…

There are actually two ways to think of the posthuman. One is posthuman learning as something that does away with useless biological bodies (Kurzweil 2005), and we see transhumanists – Vernor Vinge, Hans Moravec, Natasha Vita-More – in this space that sees us heading towards the singularity. But the alternative is a posthumanistic approach, which is about cultural transformations of boundaries in human-material assemblages, recognising that we have never been isolated human beings, we’ve always been part of our surroundings. That is another way to see the posthuman: the case made in Hayles (1999) that we have always been posthuman. On the other hand we also have the Spinozist approach, which asks how we, if we understand ourselves as de-centred, are able to see ourselves as agents. In other words we are not separate from the culture; we are not of nature, not of culture, but naturecultural (Hayles; Haraway).

But at the same time, if it is true that human beings can literally shape the crust of the earth, we are now witnessing anthropomorphism on steroids (Latour, 2011 – Waiting for Gaia [PDF]). The Anthropocene perspective is that, if human impact on Earth can be translated into human responsibility for the earth, the concept may help stimulate appropriate societal responses and/or invoke appropriate planetary stewardship (Head 2014); the Capitalocene (see Jason Moore) talks about moving away from Cartesian dualism in global environmental change; the alternative implies a shift from humanity and nature to humanity in nature – we have to counter capitalism in nature.

So, from the human to the posthuman – I have argued that this is a way we can go with our theories… There are two ways to understand that: singularist posthumanism or Spinozist posthumanism. And I think we need to take a posthumanistic stance on learning – taking account of learning in technological naturecultures.

My own take here… We talk about intra-species differentiations. This nature is not nature as resource but rather nature as matrices – a nature that operates not only outside and inside our bodies (from global climate to the microbiome) but also through our bodies, including embodied minds. We do create intra-species differentiation, where learning changes what matters to you and others, and what matters changes learning. To create an ecologically responsible ultra-sociality we need to see ourselves as a species of normative learners in cultural organisations.

So, from my own experience: after studying physicists as an anthropologist I no longer saw the night sky the same way. Before, there were stars and star constellations; after that work I saw thousands of potential suns – and perhaps planets – and that wasn’t a wider discussion at that time.

I see it as a human thing to be learners. And we are ultra-social learners – that is a characteristic of being human. Collective learning is essentially what has made us culturally diverse. We have learning theories that are relevant for cultural diversity. We have to think of learning in a cultural way: mediational approaches in collective activity. Vygotsky takes the idea that we are social learners before we become personal learners, and that is about mediation – not natureculture but cultureculture (Moll 2000). That’s my take on it. So, we can re-centre human beings… Humans are not the centre of the universe, or of the environment. But we can be at the centre and think about what we want to be, what we want to become.

I was thinking of coming in with a critique of MOOCs, particularly as a Capitalocene position. But I think we need to think of social learning before we look at individual learning (Vygotsky 1981). And we are always materially based. So, how do we learn to be engaged collectively? What does it matter – for MOOCs for instance – if we each take part from very different environments and contexts, when that environment has a significant impact? We can talk about those environments and what impact they have.

You can buy robots now that can be programmed – essentially sex robots like “Roxxxy” – and that are programmed by reactions to our actions, emotions etc. If we learn from those actions and emotions, we may relearn and be changed in our own actions and emotions. We are seeing a separation of tool-creation from user-demand in the Capitalocene. The introduction of robots into workplaces often does not address the work that workers actually want support with. The seal robots used to calm dementia patients cover a role that many carers actually enjoyed in their work, the human contact and support. But those introducing them spoke of efficiency, the idea being to make employees superfluous, though described as “simply an attempt to remove some of the most demeaning hard tasks from the work with old people so the work time can be used for care and attention” (Hasse 2013).

These alternative relations with machines are things we always react to; humans always stretch themselves to meet the challenge or engagement at hand. An inferentialist approach (Derry 2013) acknowledges many roads to knowledge, but the materiality of thinking reflects that we live in a world not just of causes but of reasons. We don’t live in a purely representationalist paradigm (Bakker and Derry 2011); it is much more complex. Material wealth will teach us new things… But maybe these machines will encourage us to think we should learn in a representational rather than an inferentialist way. We have to challenge the robotic space of reasons. I would recommend Jan Derry’s work on Vygotsky in this area.

For me, robot representationalism has the capacity to make convincing representations… You can give and take answers but you can’t engage in the space of reasons… Robots cannot reason from these representations. Representational content is not articulated by determinate negation and complex concept formation. Algorithmic learning has potential and limitations, and is based on representationalism, not concept formation. I think we have to take a position on posthumanistic learning: with collectivity as a normative space of reasons; acknowledging mattering matter in concept formation; acknowledging human inferentialism; acknowledging transformation in environment…

Discussion/Q&A

Q1) Can I ask about causes and reasons… My background is psychology and I could argue that we are more automated than we think we are, that reasons come later…

A1) Inferentialism is challenging the idea of giving and taking reasons as part of a normative space. It’s not that anything goes… It’s sort of narrowing it down: humans come into being, in terms of learning and thinking, in a normative space that is already there. Wilfrid Sellars says there is no “bare given” – we are in a normative space; it’s not nature doing this… I have some problems with the term dialectical… But it is a kind of dialectical process. If you give and take reasons, it’s not that anything goes. I think Jan Derry has a better phrasing for this. But that is the basic sense. And it comes for me from analytical philosophy – which I’m not a huge fan of – but they are asking important questions about what it is to be human, and what it is to learn.

Q2) Interesting to hear you talk about Jan Derry. She talks about technology perhaps obscuring some of the reasoning process, and I was wondering how representational things fit in?

A2) Not in the book I mentioned, but she has been working on this type of area at the University of London. It is part of the idea of not needing to learn representational knowledge, which is built into technological systems; for inferentialism we need really good teachers. She has examples about learning about the bible: she followed a school class… who look at the bible, understand the 10 commandments, and are then asked to write their own 10 commandments on whatever topic… That’s very narrow reasoning… It is engaging but it is limited.

Q3) An ethics issue… If we could devise robots or machines, AI, that could think inferentially, should we?

A3) That is a challenge for me – we don’t have enough technical people. My understanding is that it’s virtually impossible to do that. You have claims, but the capacities of AI systems so far are so limited in terms of function. I think that “theory of mind” is so problematic. It diminishes what it means to be human, and narrows what it means to be our species. I think algorithmic learning is representational… I may be wrong though… If we can… There are political issues. Why make machines that are one-to-one with human beings… Maybe to be slaves, to do dirty work. If they can think inferentially, should they not have ethical rights? As Spinozists we have a responsibility to think about those ethical issues.

Q4) You use the word robot, and that term is being used for something very embodied and physical… But what about algorithmic agency, much less embodied and much less visible – you mentioned the stock market – and how does that fit in?

A4) In a way robots are a novelty, a way to demonstrate that. A chatbot is also a robot. Robot covers a lot of automated processes. One of the things that came out of AI at one point was that AI couldn’t learn without bodies… That for deep learning there needs to be some sort of bodily engagement, to make bodily mistakes. But then through encounters like Roxxxy and others they become very much better… As humans we stretch to engage with these robots… We take an answer for an answer, not just an algorithm, and that might change how we learn.

Q4) So the robot is a point of engaging for machine learning… A provocation.

A4) I think roboticists see this as an easy way to make this happen. But everything happens so quickly… Chips in bodies etc. But we can also have robots moving in space, engaging with chips.

Q5) Is there something here about artificial life, rather than artificial intelligence – that the robot provokes that…

A5) That is what a lot of roboticists work at: trying to create artificial life… There is a lot of work we haven’t seen yet. They are working on learning algorithms in computer programming now that evolve with the process – a form of artificial life. They hope to create robots that, if they malfunction, can self-repair so that the next generation is better. At a conference in Prague recently we asked the roboticists “what do you mean by better?” and they simply couldn’t answer that, which was really interesting… I do think they are working on artificial life as well. And maybe there is too little connection between those of us in education and those who create these things.

Q6) I was approached by robotics folks about teaching robots to learn drawing with charcoal, largely because the robotic hand had enough sensitivity to do something quite complex – to teach charcoal drawing and representation… The teacher gesticulates, uses metaphor, describes things… I teach drawing and representational drawing… There is no right answer there, which is tough for robotics… What is the equivalent cyborg/dual space in learning? Drawing tools are cyborg-esque in terms of digital and drawing tools… But also that idea of culture… You can manipulate tools, have awareness of function and then the hack, and the complexity of that hack… I suppose lots of things were ringing true but I couldn’t quite stick them into what I’m trying to get at…

A6) Some of this is maybe tied to Schuman Enhancement Theory – the idea of a perfect cyborg drawing?

Q6) No, they were interested in improving computer learning, and language, but for me… it’s the idea of human creativity and hacking… You could pack a robot with the history of art, and representation – so much information… It could do a lot… But is that better art? Or better design? A conversation we have to have!

A6) I tend to look at the dark side of the coin in a way… Not because I am a techno-determinist… I do love gadgets; technology enhances our life, we can be playful… But in the Capitalocene… There is much more focus on this. The creative side of technology is what many people are working on… Fantastic things are coming up, crossovers in art… New things can be created… What I see in nursing and teaching learning contexts is how people avoid engaging… So lifting robots are here, but nursing staff aren’t trained properly and they avoid them… Creativity goes many ways… I’m seeing this from quite a particular position, and that is partly a position of warning. These technologies may be creative and they may then make us less and less creative… That’s a question we have to ask. Physicists, who have to be creative, are always so tied to the materiality, the machines and technologies in their working environments. I’ve also seen some of these drawing programmes… It is amazing what you can draw with these tools… But you need purpose, awareness of what those changes mean… Tools are never innocent. We have to analyse what tools are doing to us.


If you give a historian code: Adventures in Digital Humanities – Jean Bauer Seminar LiveBlog

This afternoon I’m at UCL for the “If you give a historian code: Adventures in Digital Humanities” seminar from Jean Bauer of Princeton University, who is being hosted by Melissa Terras of the UCL Centre for Digital Humanities. I’ll be liveblogging so, as usual, any corrections and additions are very much welcomed. 

Melissa is introducing Jean, who is in London en route to DH 2016 in Krakow next week. Over to Jean:

I’m delighted to be here with all of the wonderful work Melissa has been doing here. I’m going to talk a bit about how I got into digital humanities, but also about how scholars in library and information sciences, and scholars in other areas of the humanities might find these approaches useful.

So, this image is by Benjamin West: the Treaty of Paris, 1783. This is the era that I research and that I am interested in. In particular I am interested in John Adams, the first minister of the United States – he even gets one line in Hamilton: the Musical. He’s really interesting as he was very concerned with getting thinking and processes on paper, and I’m interested in the work he did in Europe, where there hadn’t really been American foreign consuls before. And he was also working on areas of North America, making changes that locked the British out of particular trading blocks through adjustments brought about by that peace treaty – and I might add that this is a weird time to give this talk in England!

Now, the foreign service at this time kind of lost contact once they reached Europe and left the US. So the correspondence is really important and useful for understanding these changes. There are only 12 diplomats in Europe from 1775-1788, but that grows and grows, with consuls and diplomats increasing steadily. And most of those consuls are unpaid, as the US had no money to support them. When people talk about the diplomats of this time they tend to focus on future presidents etc., and I was interested in this much wider group of consuls and diplomats. So I had a dataset of letters sent to John Jay as he was negotiating the treaty. To use that I needed to put this into some sort of data structure – so, this is it. And this is essentially the world of 1820 as expressed in code. So we have locations, residences, assignments, letters, people, etc. Within that data structure we have letters – sent to or from individuals, to or from locations, with dates assigned to them. And there are linkages here. Databases don’t handle fuzzy dates well, and I don’t want invalid dates, so I have Boolean flags for date certainty. And also a process for handling enclosures – right now that’s letters, but people did enclose books, shoes, statuettes – all sorts of things! And when you look at locations, these connect to “in states” and states and location information… This dataset occurs within the Napoleonic wars, so none of the boundaries are stable in these times; the same location shifts in meaning/state depending on the date.
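For a sense of what such a schema might look like in practice, here is a hedged sketch in Django (the framework the project runs in, as mentioned later); the model and field names are my own illustration rather than Project Quincy’s actual code:

```python
from django.db import models

class Location(models.Model):
    name = models.CharField(max_length=100)
    latitude = models.FloatField()
    longitude = models.FloatField()

class Letter(models.Model):
    sender = models.CharField(max_length=100)
    recipient = models.CharField(max_length=100)
    origin = models.ForeignKey(Location, related_name="letters_sent",
                               null=True, on_delete=models.SET_NULL)
    destination = models.ForeignKey(Location, related_name="letters_received",
                                    null=True, on_delete=models.SET_NULL)
    # Databases reject fuzzy dates, so store a valid date plus Boolean flags
    # recording which parts of it are actually known.
    date = models.DateField()
    year_known = models.BooleanField(default=True)
    month_known = models.BooleanField(default=True)
    day_known = models.BooleanField(default=True)
    # Enclosures: a letter can travel inside another letter (the books, shoes
    # and statuettes would need further models).
    enclosed_in = models.ForeignKey("self", null=True, blank=True,
                                    on_delete=models.SET_NULL)
```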

So, John Jay has all this correspondence between May 27 and Nov 19, 1794, going from Europe to North America, and between the West Indies and North America. Many of these letters are reporting on trouble. In the West Indies there are ship seizures… And there are debts to Britain… And none of these issues get resolved in that treaty. Instead John Jay and Lord Grenville set up a series of committees – and this is the historical precedent for mediation. Which is why I was keen to understand what information John Jay had available. None of this correspondence got to him early enough in time. There wasn’t information there to resolve the issues, but enough to understand them. But there were delays for safety, for practical issues – the State Department was 6 people at this time – but the information was being collected in Philadelphia. So you have a centre collecting data from across the continent, but not able to push it out quickly enough…

And if you look at the people in these letters you see John Jay, and you see Edmund Jennings Randolph mentioned most regularly. So, I have this elaborate database and lots of ways to visualise it… which enables us to see connections, linkages, and places where different comparisons highlight different areas of interest. And this is one of the reasons I got into the Digital Humanities. There are all these papers – usually for famous historical men – and they get digitised, also the enclosures… in a single file(!). Parsing that with a partial typescript, you start to see patterns. You see not summaries of information being shared, not aggregation and analysis, but the letters being bundled up and sent off – like a repeater note. So, building up all of this stuff… Letters are objects; they have relationships to each other; they move across space and time. You look at the papers of John Adams, or of any political leader, and they are just in order of date sent… requiring us to flip back and forth. Databases and networks allow us to follow those conversations, to understand new orders in which to read those letters.
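The network side of this is easy to picture: each letter becomes a directed edge from sender to recipient, and simple degree counts already surface the central correspondents. A toy sketch with networkx – the letters listed are invented for illustration, using names from the talk:

```python
import networkx as nx

letters = [
    ("Edmund Jennings Randolph", "John Jay"),
    ("Edmund Jennings Randolph", "John Jay"),
    ("George Hammond", "John Jay"),
    ("John Jay", "Lord Grenville"),
]

G = nx.MultiDiGraph()  # a multigraph: many letters can link the same pair
G.add_edges_from(letters)

# Who receives the most correspondence?
print(sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True))
```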

Now, I had a background in code before I was a graduate student. What I do now at Princeton is work with librarians and students to build new projects. We use a lot of relational databases, and network analysis… And that means a student like one I have at the moment can have a fully described, fully structured dataset on a Vagrant machine that she can engage with, query, analyse, and convey to her examiners etc. Now, this student was an Excel junkie, but approaching the data as a database allows us to structure the data, to think about information, the nature of sources and citation practices, and also to get major demographic data on her group and the things she’s working on.

Another thing we do at Princeton is work with libraries and with catalogue data – thinking about data in MARC, MODS, or METS/ALTO records, and about extracting and reformatting that data to query and rethink it. And we work with librarians on information retrieval, and how that could be translated to research – book history perhaps. Princeton University Library bought the personal library of philosopher Jacques Derrida – close to 19,000 volumes (they thought it was about 15,000 until they were unpacked) – so two projects are happening simultaneously. One is at the Centre for Digital Humanities, looking at how Derrida marked up the texts, and then went on to use and cite them in Of Grammatology. The other is with BibFrame – a Linked Open Data standard for library catalogues – looking at books sent to Derrida, with dedications to him. Now, there won’t be much overlap of those projects just now – Of Grammatology was his first book, so the books dedicated/gifted to him came later. But we are building our databases for both projects as Linked Open Data, all being added a book at a time, so the hope is that we’ll be able to look at any relationships between the books that he owned and the way that he was using and being gifted items. And this is an experiment to explore those connections, and to expose them via the library catalogue… But the library wants to catalogue all works, not just those with research interest. And it can be hard to connect research work, with depth and challenge, back to the catalogue, but that’s what we are trying to do. And we want to be able to encourage more use of and access to the works, without the library having to stand behind the work or analyse the work of a particular scholar.
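As a rough illustration of the linking idea: two graphs built independently can later be joined on shared identifiers such as an accession number. The URIs and properties below are invented for the sketch, not the projects’ actual vocabularies:

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/derrida/")  # hypothetical namespace
g = Graph()

book = URIRef(EX["book/accession-12345"])  # hypothetical accession id
g.add((book, EX.title, Literal("De la grammatologie")))
g.add((book, EX.annotatedBy, URIRef(EX["person/jacques-derrida"])))

# A catalogue graph using the same accession id can be merged straight in.
print(g.serialize(format="turtle"))
```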

So, you can take a data structure like this, then set up your system with appropriate constraints and affordances – these need to be thought about, as they will shape what you can and will do with your data later on. Continents have particular locations, boundaries, shape files. But you can’t mark out the boundaries for empires and states. The Western boundary at this time is a very contested thing indeed. In my system states are merely groups of locations, so that I can follow mercantile power, and think from a political viewpoint. But I wanted a tool with broader use, hence that other data. Locations seem very safe and neutral but they really are not; they are complex and disputed. Now, for that reason I wanted this tool – Project Quincy – to have others using it, but that hasn’t happened yet… because this was very much created for my research and research question… It’s my own little Mind Palace for my needs… But I have heard from a researcher looking to catalogue those letters, and that would be very useful. Systems like this can have interesting afterlives, even if they don’t have the uptake we want Open Source Digital Humanities tools to have. The biggest impact of this project has been that I have the schema online. Some people do use the American Foreign Correspondents database – it is one of the few places you can find this information, especially about consuls. But that schema being shared online has been helping others to make their own systems… In that sense, the more open documentation we can do, the better all of our projects could be.

I also created those diagrams that you were seeing – a programme that creates these allows you to produce easy-to-read, easy-to-follow, annotated, colour-coded visuals. They are prettier than most database diagrams. I hope that when documentation is appealing and more transparent, it will get used more… That additional step helps people understand what you’ve made available for them… And you can use documentation to help teach someone how to make a project. So when my student was creating her schema, it was an example I could share or reference. Having something more designed was very helpful.
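In the same spirit, a minimal sketch of generating an annotated, colour-coded schema diagram programmatically – this uses the graphviz package and invented entity names, and is only meant to suggest the kind of output described, not to reproduce the actual tool:

```python
from graphviz import Digraph  # pip install graphviz

dot = Digraph(comment="Correspondence schema")
dot.node("Letter", "Letter\n(date + fuzzy-date flags)", style="filled", fillcolor="lightblue")
dot.node("Person", "Person", style="filled", fillcolor="lightyellow")
dot.node("Location", "Location", style="filled", fillcolor="lightgreen")
dot.edge("Letter", "Person", label="sender / recipient")
dot.edge("Letter", "Location", label="origin / destination")
dot.render("schema", format="png")  # writes schema.png
```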

Q&A

Q1) Can you say more about the Derrida project and that holy grail of hanging that other stuff on the catalogue record?

A1) So the BibFrame schema is not as flexible as you’d like – it’s based on MARC – but it’s Linked Open Data; it can be expressed in RDF or JSON… And that lets us link records up. And we are working in the same library so we can link up on people, locations, maybe also major terms, and on the accession id number too. We haven’t tried it yet but…

Q1) And how do you make the distinction between the authoritative record and other data?

A1) Jill Benson(?)’s team are creating authoritative Linked Open Data records for all of the catalogue. And we are creating Linked Open Data; we’ll put it in a relational database with an API and an endpoint to query, to generate that data. Once we have something we’ll look at offering a Triple Store on an ongoing basis. So, basically it is two independent data structures growing side by side with an awareness of each other. You can connect via API, but we are also hoping for a demo of the Derrida library in BibFrame in the next year or two. At least a couple of the books there will be annotated, so you can see data from under the catalogue.

Q1) What about the commentary or research outputs from that…

A1) So, once we have our data, we’ll make a link to the catalogue and pull in from the researcher system. The link back to the catalogue is the harder bit.

Q2) I had a suggestion for a geographic system you might be interested in called Pelagios… And I don’t know if you could feed into that – it maps historical locations, fictional locations etc.

A2) There is a historical location atlas held by the Newberry, so there are shapefiles. Last I looked at Pelagios it was concerned more with the ancient world.

Comment) Latest iteration of funding takes it to Medieval and Arabic… It’s getting closer to your period.

A2) One thing that I really like about Pelagios is that they have split locations from their names, which accommodates multiple names, multiple imaginings and understandings etc. It’s a really neat data model. Mine is more hacked together – so in mine “London” is a point at the centre of modern London… That doesn’t make much sense for London, but I do similar for Paris, which probably makes more sense. So you could go in deeper… There was a time when I was really interested in where all of Jay’s London correspondents were… That was what put me into thinking about network analysis… 60 letters are within London alone. I thought about disambiguating it more… But I was more interested in the people. So I went down a Royal Mail in London 1794 rabbit hole… And that was interesting, thinking about letters as a unit of information… Diplomatic notes fix conversations into a piece of paper you can refer to later – capturing the information and decisions. They go back and forth… So the ways letters came and went across London – sometimes several per day, sometimes over a week within the city… is really interesting… London was and is extremely complicated.

Q3) I was going to ask about different letters. Those letters in London sound more like memos than letters. But the others being sent are more precarious, at more time delay… My background is classics, so there you tend to see a single letter – and you’d commission someone like Cicero to write a letter for you to stick up somewhere – but these letters are part of a conversation… So what is the difference in these transatlantic letters?

A3) There are lots of letters. I treat letters capaciously… If there is a “to” or “from” it’s in. So there are diplomatic notes between John Jay and George Hammond – a minister, not an ambassador, as the US didn’t warrant that. Hammond was bad at his job – he saw a war coming and therefore didn’t see value in negotiating. They exchange notes, forwarding conversations back and forth. My dataset for my research was all the letters sent to Jay, not those sent by Jay. I wanted to see what information Jay had available. With Hammond, he kept a copy of all his letters to Jay, as evidence for very petty disputes. The letters from the West Indies were from Nathanial Cabbot Dickinson, who was sent as an information collector for the US government. Jay was sent to Europe on the treaty… So the kick-off for Jay’s treaty is changes that see food supplies to the British West Indies being stopped. Hammond actually couldn’t find a ship to take evidence against admiralty courts… They had to go through Philadelphia, then through London. So that cluster of letters includes older letters. Letters from the coast include complaints from angry American consuls… There are urgent cries for help from the US. There is every possible genre… One of the things I love about American history is that Jay needs all the information he can get. When you map letters – like the Republic of Letters project at Stanford – you have this issue of someone writing to their tailor, not just important political texts. But for diplomats all information matters… Now you could say that a letter to a tailor is important, but you could also say you are looking to map the boundaries of intellectual history here… Now, in my system I map duplicates sent transatlantically, as those really matter – not all arrived, etc. I don’t map duplicates within London, as that isn’t as notable and is more about after-the-fact archiving.

Q4) Did John Jay keep diaries that put this correspondence in context?

A4) He did keep diaries… I do have analysis of how John Quincy Adams handled letters in his time. He created subject headings, he analysed them, he created a filing system and way of managing his letters – he’d docket his letters, noting the date received. He was like a human database… Hence naming my database after him.

Q5) There are a couple of different types of use for a tool like this. There is your use, and then there is reuse of the engineering. I have correspondence earlier than Jay’s, mainly centred on London… Could I download the system and input my own letters?

A5) Yes – if you go to eafsd.org you’ll find more information, and you can try out the system. The database is Project Quincy and that’s on GitHub (GPL 3.0) and you can fire it up in Django. It comes with a nice interface. And do get in touch and I’ll update you on the system etc. It runs in the Django framework, and can use any database underneath it. And there may be a smaller, tractable letter database running underneath it.

Comment) On BibFrame… We have a Library and Information Studies programme in which we teach BibFrame. We set up a project with a teaching tool which is also on GitHub – it’s linked from my staff page.

Do you think any system can be generically reused?

Have you submitted this to JORS?


Flood and Coastal Erosion Risk Management Network (FCERM.net) 2016 Annual Assembly Liveblog

Today I am at the Flood and Coastal Erosion Risk Management Network (FCERM.net) 2016 Annual Assembly in Newcastle. The event brings together a really wide range of stakeholders engaged in flood risk management. I’m here to talk about crowd sourcing and citizen science, with both COBWEB and University of Edinburgh CSCS Network member hats on, as the event is focusing on future approaches to managing flood risk and of course citizen science offers some really interesting potential here. 

I’m going to be liveblogging today but as the core flooding focus of the day is not my usual subject area I particularly welcome any corrections, additions, etc. 

The first section of the day is set up as: Future-Thinking in Flood Risk Management:

Welcome by Prof Garry Pender

Prof Hayley Fowler, Professor of Climate Change Impacts, Newcastle University – An Uncertain Future: Climate, Weather and Flooding

Phil Younge, Environment Agency – The Future of Flood Risk Management

The next section of the day looks at: Research into Practice – Lessons from Industry:

David Wilkes – Global Flood Resilience, Arup – Engineering Future Cities, Blue-Green Infrastructure

Stephen Garvin, Director Global Resilience Centre, BRE – Adapting to change – multiple events and FRM

Jaap Flikweert – Flood and Coastal Management Advisor, Royal HaskoningDHV – Resilience and adaptation: coastal management for the future

Sharing Best Practice – Just 2-minutes – Mini presentations from delegates sharing output, experience and best practice

I will be taking some notes in this session, but I am also presenting a 2-minute session on behalf of my COBWEB colleague Barry Evans (Aberystwyth University), on our co-design work and research associated with our collaboration with the Tal-y-bont Floodees in Mid-Wales.

At this point in the day we move to the Parallel Breakout sessions on Tools for the Future. I am leading Workshop 1 on crowd sourcing so won’t be blogging them, but include their titles here for reference:

  • Workshop 1 – Crowd-Sourcing Data and Citizen Science An exploration of tools used to source environmental data from the public led by Nicola Osborne CSCS Network with case studies from SEPA
  • Workshop 2 – Multi-event modelling for resilience in urban planning An introduction to tools for simulating multiple storm events with consideration of the impacts on planning in urban environments with case studies from BRE and Scottish Government
  • Workshop 3 – Building Resilient Communities Best-practice guidance on engaging with communities to build resilience, led by Dr Esther Carmen with case studies from the SESAME project

We finish the day with a session on Filling the Gaps– Future Projects:

Breakout time for discussion around future needs and projects

Feedback from groups 

Final Thoughts from FCERM.net – Prof. Garry Pender 


elearning@ed/LTW Monthly Meet Up: Assessment and Feedback LiveBlog

This afternoon I’m at the eLearning@ed/LTW monthly Showcase and Network event, which this month focuses on Assessment and Feedback.
I am liveblogging these notes so, as usual, corrections and updates are welcomed. 
The wiki page for this event includes the agenda and will include any further notes etc.: https://www.wiki.ed.ac.uk/x/kc5uEg
Introduction and Updates, Robert Chmielewski (IS Learning, Teaching and Web)
Robert consults around the University on online assessment – and there is a lot of online assessment taking place. It is an area supported by everybody. Students are interested in submitting and receiving feedback online; technologists recognise the advantages of online assessment and feedback; and the University as a whole sees the benefits, e.g. clarity over meeting timelines for feedback. The last group here is the markers, who are more and more appreciative of the affordances of online assessment and feedback. So there are a lot of people who support this, but there are challenges too. So, today we have an event to share experiences across areas, across levels.
Before we kick off I wanted to welcome Celeste Houghton. Celeste: I am the new Head of Academic Development for Digital Education at the University, based at IAD, and I’m keen to meet people and to find out more about what is taking place. Do get in touch.
eSubmission and eFeedback in the College of Humanities and Social Science, Karen Howie (School of History, Classics & Archaeology)
This project started back in February 2015. The College of Humanities and Social Sciences wants 100% electronic submission/feedback, where “pedagogically appropriate”, by the 2016/17 academic year. Although I’m saying electronic submission/feedback, the in-between marking part hasn’t been prescribed. The project board for this work includes myself, Robert and many others, any of whom you are welcome to contact with any questions.
So, why do this? Well, there is a lot of student demand for various reasons – legibility of comments; printing costs; enabling remote submission. For staff the benefits are more debatable but they can include (as also reported by Jisc) increased efficiency and convenience. Benefits for the institution (again as reported by Jisc) include measuring feedback response rates, and efficiencies that free up time for student support.
Now some parts of CHSS are already doing this. Social and Political Studies are using an in-house system. Law are using GradeMark. And other schools have been running pilots, most of them with GradeMark, and these have been mostly successful. But we’ve had lots of interesting conversations around these technologies, around quality of assessment, about health and safety implications of staring at a screen more.
We have been developing a workflow and process for the college, but we want this to be flexible to schools’ profiles – so we’ve adopted a modular approach that allows for handling of groups/tutors; declaration of own work; checking for non-submitters; marking sheets and rubrics; moderation, etc. And we are planning for the next year ahead, working closely with the Technology Enhanced Learning group in HSS. We are running some training – for markers it’s a mixture of in-school training and College input/support; for administrators it’s delivered by learning technologists in the school or through discussions with IS LTW EDE. To support that process we have screencasts and documentation currently in development. PebblePad isn’t part of this process, but will be.
To build confidence in the system we’re doing some myth busting. For instance, anonymity vs pastoral care issues – a receipt dropbox has been created, and we have an agreement with EUSA that we can de-anonymise if identification is not provided. And we have also been looking at various other regulations etc. to ensure we are complying with and/or interpreting them correctly.
So, those pilots have been running. We’ve found that, depending on your processes, the administration can be complex. Students have voiced concerns around “generic” feedback. Students were anxious – very anxious in some cases. It is much quicker for markers to get started with marking, as soon as the deadline has passed. But there are challenges too – including when networks go down; for instance there was an (unusual) DDOS attack during our pilots that impacted our timeline.
Feedback from students seems relatively good. 14 out of 36 felt the quality of marking was better than on paper – but 10 said it was less good. 29 out of 36 said feedback was more legible. 10 felt they had received more feedback than normal, 11 less. 3 out of 36 would rather submit on paper; 31 would rather submit online. In our first pilot with first year students around 10% didn’t look at feedback for their essay, and 36% didn’t look at tutorial feedback. In our second pilot about 10% didn’t look at feedback for either assignment.
Markers reported finding the electronic marking easier, but some felt that the need to work on screen was challenging or less pleasant than marking on paper.
Q&A
Q1) The students who commented on less or more feedback than normal – what were they comparing to?
A1) To paper-based marking, which they would have had for other courses. So when we surveyed them they would have had some paper-based and some electronic feedback already.
Q2) A comment about handwriting and typing – I read a paper that said that on average people write around 4 times more words when typing than when hand writing. And in our practice we’ve found that too.
A2) It may also be student perceptions – it looks like less but is actually quite a lot of work. I was interested in students’ expectations that 8 days was a long time to turn around feedback.
Q2) I think that students need to understand how much care has been taken, and that that adds to how long these things take.
Q3) You pointed out that people were having some problems and concerns – like health and safety. You are hoping for 100% take up, and also that backdrop of the Turnitin updates… Are there future plans that will help us to move to 100%
A3) The health and safety thing came up again and again… But it’s maybe to do with how we cluster assignments. In terms of Turnitin there are updates, but those emerge rather slowly – there is a bit more competition now, and some frustration across the UK, so it looks likely that there will be more positive developments.
Q4) It was interesting, that idea that you can’t release some feedback until it is all ready… For us in the Business School, we ended up releasing feedback when there was a delay.
A4) In our situation we had some marks ready in a few days, others not due for two weeks. A few days would be fair, a few weeks would be problematic. It’s an expectation management issue.
Comment) There is also a risk that if marking is incomplete or partially done it can cause students great distress…
Current assessment challenges, Dr. Neil Lent (Institute for Academic Development)
My focus is on assessment and feedback. Initially the expectation was that I’d be focused on how to do assessment and feedback “better”. And you can do that to an extent but… The main challenge we face is a cultural rather than a technical challenge. And I mean technical in the widest sense – technological, yes, but also technical in terms of process and approach. I also think we are talking about “cultures” rather than “culture” when we think about this.
So, why are we focussing on assessment and feedback? Well, we have low NSS scores, low league table positions and poor student experience reported around this area. Also issues of (un)timely feedback, low utility, and the idea that we are a research-led university and the balance of that with learning and teaching. Some of these areas are more myth than reality. I think as a university we now have an unambiguous focus on teaching and learning, but whether that has entirely permeated our organisational culture is perhaps arguable. When you have competing time demands it is hard to do things properly, and to find the space to actually design better assessment and feedback.
So how do we handle this? Well, if we look at the “Implementation Staircase” (Reynolds and Saunders 1987) we can see that it comes from senior management, then to colleges, to schools, to programmes, to courses, to students. Now you could go down that staircase or you can go back up… And that requires us to think about our relationships with students. Is this model dialogic? Maybe we need another model?
Activity theory (Engestrom 1999) is a model for a group like a programme team, or course cohort, etc. So we have a subject here – it’s all about the individual – in the context of an object, the community, mediating tools, rules and conventions, and division of labour. This is a classic activity theory idea, with modern cultural aspects included. So for us the subject might be the marker; the object the assignment; the mediating tool something like the technological tools or processes; rules and conventions may include the commitment to return marks within 2 weeks; division of labour could include colleagues and sharing of marking; community could be students. It’s just a way to conceptualise this stuff.
A cultural resolution would see culture as practice and discourse. Review and reflection need to be an embedded and internalised way of life. We have multiple stakeholders here – not always the teacher or the marker. And we need a bit of risk taking – but that’s scary. It can feel at odds with the need to perform at a high level, but risk taking is needed. And we need best practice sharing at events such as this.
So there are technical things we could do better, do right. But the challenge we face is more of a collective one. We need to create time and space for colleagues to genuinely reflect on their teaching practice, to interact with that culture. But you don’t change practice overnight. And we have to think about our relationship with our students, thinking about how we encourage and enable them to be part of the process, building up their own picture of what good/bad work looks like. And then the subject, object and culture will be closer together. Sometimes real change comes from giving examples of what works, inspiring through those examples etc. Technological tools can make life easier, if you have the time to spend to understand them and how to make them work for you.
Q&A
Q1) Not sure if it’s a question or comment or thought… But I’m wondering what we take from those NSS scores, and if that’s what we should work to or if we should think about assessment and feedback in a different kind of paradigm.
A1) When we think about processes we can kid ourselves that this is all linear, that it’s cause and effect. It isn’t that simple… The other thing is about concentrating on giving feedback on time, so students can make use of it. But when it comes to the NSS it commodifies feedback, which challenges the idea of feedback as dialogic. There are cultural challenges for this. And I think that’s where risk, and the potential for interesting surprises, come in…
Q2) As a parent of a teenager I now wonder about personal resilience, to be able to look at things differently, especially when they don’t feel confident to move forwards. I feel that for staff and students a problem can arise and they panic, and want things resolved for them. I think we have to move past that by giving staff and students the resilience so that they can cope with change.
A2) My PhD was pretty much on that. I think some of this comes from the idea of relatively safe risk taking… That’s another kind of risk taking. As a sector we have to think that through. Giving marks for everything risks everything not feeling like a safe space.
Q3) Do we not need to make learning the focus?
A3) Schools and universities push the idea that grades and outcomes really matter, when actually we would say “no, the learning is what matters”, but that’s hard in the wider context in which the certificate in the hand is valued.
Comment) Maybe we need that distinction that Simon Riley talked about at this year’s eLearning@ed conference, of distinguishing between the task and the assignment. So you can fail the task but succeed that assignment (in that case referring to SLICCs and the idea that the task is the experience, the assignment is writing about it whether it went well or poorly).
Not captured in full here: a discussion around the nature of electronic submission, and students concern about failing at submitting their assignments or proof of learning… 
Assessment Literacy: technology as facilitator, Prof. Susan Rhind (Assistant Principal Assessment and Feedback)
Open Discussion on technology in Assessment and Feedback          


Principal’s Teaching Award Scheme Forum 2016 – Liveblog

Today I’m at the University of Edinburgh Principal’s Teaching Award Scheme Forum 2016: Rethinking Learning and Teaching Together, an event that brings together teaching staff, learning technologists and education researchers to share experience and be inspired to try new things and to embed best practice in their teaching activities.

I’m here partly as my colleague Louise Connelly (Vet School, formerly of IAD) will be presenting our PTAS-funded Managing Your Digital Footprint project this afternoon. We’ll be reporting back on the research, on the campaign, and on upcoming Digital Foorprints work including our forthcoming Digital Footprint MOOC (more information to follow) and our recently funded (again by PTAS) project: “A Live Pulse: YikYak for Understanding Teaching, Learning and Assessment at Edinburgh.

As usual, this is a liveblog so corrections, comments, etc. welcome. 

Velda McCune, Deputy Director of the IAD who heads up the learning and teaching team, is introducing today:

Welcome, it’s great to see you all here today. Many of you will already know about the Principal’s Teaching Award Scheme. We have funding of around £100k from the Development fund every year, since 2007, in order to look at teaching and learning – changing behaviours, understanding how students learn, investigating new education tools and technologies. We are very lucky to have this funding available. We have had over 300 members of staff involved and, increasingly, we have students as partners in PTAS projects. If you haven’t already put a bid in we have rounds coming up in September and March. And we try to encourage people, and will give you feedback and support and you can resubmit after that too. We also have small PTAS grants as well for those who haven’t applied before and want to try it out.

I am very excited to welcome our opening keynote, Paul Ashwin of Lancaster University, to kick off what I think will be a really interesting day!

Why would going to university change anyone? The challenges of capturing the transformative power of undergraduate degrees in comparisons of quality  – Professor Paul Ashwin

What I’m going to talk about is this idea of undergraduate degrees being transformative, and how as we move towards greater analytics, how we might measure that. And whilst metrics are flawed, we can’t just ignore these. This presentation is heavily informed by Lee Schumers work on Pedagogical Content Knowledge, which always sees teaching in context, and in the context of particular students and settings.

People often talk about the transformative nature of what their students experience. David Watson was, for a long time, the President of the Society of Higher Education (?) and in his presidential lectures he would talk about the need to be as hard on ourselves as we would be on others, on policy makers, on decision makers… He said that if we are talking about education as transformative, we have to ask ourselves how and why this transformation takes place; whether it is a planned transformation; whether higher education is a necessary and/or sufficient condition for such transformations; and whether all forms of higher education result in this transformation. We all think of transformation as important… But I haven’t really evidenced that view…

The Yerevan Communiqué (May 2015) talks about wanting to achieve, by 2020, a European Higher Education Area with common goals, where there is automatic recognition of qualifications and students and graduates can move easily through – what I would characterise as where Bologna begins. The Communiqué talks about higher education contributing effectively to building inclusive societies, founded on democratic values and human rights, where educational opportunities are part of European citizenship. And it ends with a statement that should be a “wow!” moment, valuing teaching and learning. But for me there is a tension: the comparability of undergraduate degrees is in conflict with the idea of the transformational potential of undergraduate degrees…

Now, critique is too easy, we have to suggest alternative ways to approach these things. We need to suggest alternatives, to explain the importance of transformation – if that’s what we value – and I’ll be talking a bit about what I think is important.

Working with colleagues at Bath and Nottingham, I have been working on the Pedagogic Quality and Inequality Project, looking at Sociology students and the idea of transformation at 2 top-ranked (for sociology) and 2 bottom-ranked (for sociology) universities, gathering data and information on the students’ experience and change. We found that league tables told you nothing about the actual quality of experience. We found that the transformational nature of undergraduate degrees lies in changes in students’ sense of self through their engagement with disciplinary knowledge – students relating their personal projects to their disciplines and the world, and seeing themselves implicated in knowledge. But it doesn’t always happen – it requires students to be intellectually engaged with their courses to be transformed by them.

To quote a student: “There is no destination with this discipline… There is always something further and there is no point where you can stop and say ‘I understood, I am a sociologist’… The thing is sociology makes you aware of every decision you make: how that would impact on my life and everything else…” And we found the students all reflecting that this idea of transformation was complex – there were gains but also losses. Now you could say that this is just the nature of sociology…

We looked at studies across a range of disciplines, and at how they define transformation, at several levels: the least inclusive account; the “watershed” account – the institutional type of view; and the most inclusive account. Mathematics has the richest studies in this area (Wood et al 2012), where the least inclusive account is “Numbers”, the watershed is “Models”, and the most inclusive is “approach to life”. Similarly, Accountancy moves from routine work to moral work; Law from content to extension of self; Music from instrument to communicating; Geography from the general world to interactions; Geoscience from the composition of the earth to the relations between earth and society. Clearly these are not all in the same direction, but they are accents and flavours of the same thing. We are going to do a comparison next year on chemistry and chemical engineering, in the UK and South Africa, and this work points at what is particular to Higher Education: engaging with a system of knowledge. Now, my colleague Monica McLean would ask why that’s limited to Higher Education – couldn’t it apply to all education? That’s valid, but I’m going to ignore it just for now!

Another student commented on transformations of all types – for example, from wearing a tracksuit to lectures to no longer presenting themselves in this way. Now that has nothing to do with the curriculum; this is about other areas of life. This student almost dropped out, but the Afro-Caribbean society supported and enabled her to continue and progress through her degree. I have worked in HE and FE, and the way students talk about that transformation is pretty similar.

So, why would going to university change anyone? It’s about exposure to a system of knowledge changing your view of yourself, and of the world. Many years ago an academic asked what the point of going to university was, given that much of the information students learn will soon be out of date. And the counter-argument is the engagement with different perspectives – learning to see the world as a sociologist, to see the world as a geographer, etc.

So, to come back to this tension between the comparability of undergraduate degrees and the transformational potential of undergraduate degrees: if we are about transformation, how do we measure it? What are the metrics for this? I’m not suggesting those will be particularly helpful… But we can’t leave metrics to what is easy to gather; we also have to look at what is important.

So if we think of the first area, comparability, we tend to use rankings. National and international higher education rankings are a dominant way of comparing institutions’ contributions to student success. All universities have a set of figures that show them well. Rankings have huge power as they travel across a number of contexts and audiences – vice chancellors, students, departmental staff. A ranking moves across contexts; it’s portable and durable. It’s nonsense, but the strength of these metrics is hard to combat. They tend to involve unrelated and incomparable measures. Their stability reinforces privilege – higher-status institutions tend to enrol a much greater proportion of privileged students. You can have some unexpected outcomes, but if you don’t have Oxford, Cambridge, Edinburgh, UCL and Imperial all near the top then your league table is seen as rubbish… because we already “know” they are the good universities… Or at least those rankings reinforce the privilege that already exists, the expectations that are set. They tell us nothing about the transformation of students. But are skilful performances shaped by generic skills, or by students’ understanding of a particular task and their interactions with other people and things?

Now, the OECD has put together a ranking concept for graduate outcomes, AHELO, which uses tests for e.g. physics and engineering – not surprising choices, as they have quite international consistency and they are measurable. And they then look at generic tests – e.g. a deformed fish is found in a lake; using various press releases and science reports, write a memo for policy makers. Is that generic? In what way? Students doing these tests are volunteers, which may not be at all representative. Are the skills generic? Education is about applying a way of thinking in an unstructured space, in a space without context. Now, the students are given context in these tests, so it’s not a generic test. But we must be careful about what we measure, as what we measure can become an index of quality or success, whether or not that is actually what we’d want to mark up as success. We have strategic students who want to know what counts… and that’s OK as long as the assessment is appropriately designed and set up… The same is true of measures of success and metrics of quality in teaching and learning. That is why I am concerned by AHELO, but it keeps coming back again…

Now, I have no issue with the legitimate need for comparison, but I also need to understand what comparisons represent and how they distort. Are there ways to take account of students’ transformation in higher education?

I’ve been working, with Rachel Sweetman at the University of Oslo, on some key characteristics of valid metrics of teaching quality. For us, reliability is much, much more important than availability. So, we need ways to assess teaching quality that:

  • are measures of the quality of teaching offered by institutions rather than measures of institutional prestige (e.g. entry grades)
  • require improvements in teaching practices in order to improve performance on the measures
  • as a whole form a coherent set of metrics rather than a set of disparate measures
  • are based on established research evidence about high quality teaching and learning in higher education
  • reflect the purposes of higher education.

We have to be very aware of Goodhart’s law: we must be wary of any measure that becomes a performance indicator, as it then ceases to be a good measure.

I am not someone with a big issue with the National Student Survey – it is grounded in the right things – but the issue is that it is run each year, and the data is used in unhelpful, distorting ways. Rather than acknowledging and working on feedback, universities feel the need to label engagements as “feedback moments”, as they assume a less good score means students just don’t recognise when they have had a feedback moment.

Now, in England we have the prospect of the Teaching Excellence Framework, set out in the English White Paper and Technical Consultation. I don’t think it’s that bad as a prospect. It will include students’ views of teaching, assessment and academic support from the National Student Survey, non-completion rates, measures over three years, etc. It’s not bad. Some of these measures are about quality, and there is some coherence. But this work is not based on established research evidence… There was great work here at Edinburgh on students’ learning experiences in UK HE; none of that work is reflected in the TEF. If you were being cynical, you could think they have simply looked at the available metrics and selected the more robust ones.

My big issue with the Year 2 TEF metrics is how and why these metrics have been selected. You need a proper consultation on measures, rather than using the White Paper and Technical Consultation to do that. The Office for National Statistics looked at the measures and found them robust, but noted that the differences between institutions’ scores on the selected metrics tend to be small and not significant – not robust enough to inform future work, according to the ONS. It seems likely that peer review will end up being how we differentiate between institutions.

And there are real issues with the TEF Future Metrics… These come from a place of technical optimism: that if you just had the right measures, you’d know… They tie learner information to tax records in the “Longitudinal Education Outcomes” data set, and include “teaching intensity”. Teaching intensity is essentially contact hours… and that’s game-able… And how on earth is that about transformation? It’s not a useful measure of that. Unused office hours aren’t useful; optional seminars aren’t useful… Keith Trigwell told me about a lecturer he knew who lectured a subject where, each week, fewer and fewer students came along. The last three lectures had no students there… He still gave them… That’s contact hours that count on paper but aren’t useful. That sort of measure seems to come more from ministerial dinner parties than from evidence.

But there are things that do matter… There is no mechanism outlined for a sector-wide discussion of the development of future metrics. What about expert teaching? What about students’ relations to knowledge? What about the first-year experience, which we know is crucial for student outcomes? Now, the measures may not be easy, but they matter. We also see the Learning Gains project, which decided to work generically – but that means you don’t understand students’ particular engagement with knowledge. In generic tests the description of what you can do ends up more important than what you actually do. You are asking students to make claims about what they can do, rather than to perform those things. You can see why it is attractive, but it’s meaningless; it’s not a good measure of what Higher Education can do.

So, to finish: I’ve tried to put teaching at the centre of what we do. Teaching is a local achievement – it always shifts according to who the students are, what the setting is, and what the knowledge is. But that also always makes it hard to capture and measure. So what you probably need is a lot of different imperfect measures that can be compared and understood as a whole. However, if we don’t try, we allow distorting measures, which reinforce inequalities, to dominate. Sometimes the only thing worse than not being listened to by policy makers is being listened to. That’s when we see a Frankenstein’s monster emerge, and that’s why we need to recognise the issues, to ensure we are part of the debate. If we don’t try to develop alternative measures, we leave it open to others to define them.

Q&A

Q1) I thought that was really interesting. In your discussion of the transformation of undergraduate students, I was wondering how that relates to less traditional students, particularly mature students, even those who’ve taken a year out, where those transitions into adulthood are going to be in a different place and perhaps where critical thinking and similar skills may be more developed or different.

A1) One of the studies I talked about was at London Metropolitan University, which has a large percentage of mature students… And actually there the interactions with knowledge really did prove transformative… Often students lived at home with family, whether young or mature students. That transformation was very high. And it was unrelated to achievement – some came in with quite profound challenges and they had transformation there. But you have to be really careful about not suggesting different measures for different students… that’s dangerous… but that transformation was there. There is lots of research out there… But how do we transform that into something that has purchase, recognising there will be flaws and compromises, but ensuring that voice in the debate? That it isn’t politicians owning that debate; that the transformation of students and the real meaning of education is part of it.

Q2) I found the idea of transformation that you started with really interesting. I work in African studies, where we work a lot on colonial issues and on the need to transform academia to be more representative. And I was concerned about the idea of transformation as a colonial type of issue – of being like us, of dressing like that… As much as we want to challenge students, we also need to be aware of the biases inherent in our own ways of doing things as British or global academics.

A2) I think that’s a really important question. My position is that students come into Higher Education for something. Students in South Africa – and I have several projects there – who have nowhere to live, who have very little, come into Higher Education to gain powerful knowledge. If we don’t have access to a body of knowledge that we can help students gain access to, and to gain further knowledge from, then why are we there? Why would students waste time talking to me if I don’t have knowledge? The world exceeds our ability to know it; we have to simplify the world. What we offer undergraduates is powerful simplifications, to enable them to do things. That’s why they come to us and why they see value. They bring their own biographies, contexts, settings. The project I talked about is based on the work of Basil Bernstein, who argues that the knowledge we produce in primary research… But when we design curriculum it isn’t that – we engage with colleagues, with peers, with industry… It is transformed, changed… And students also transform that knowledge; they relate it to their situation, to their own work. But we are only a valid part of that process if we have something to offer. And for us, I would argue, it’s access to a body of knowledge. I think if we only offer process, we are empty.

Q3) You talked about learning analytics and the issues with AHELO, and the idea that if you see the analytics you understand it all – and that concept not being true. But I would argue that when we look at teaching quality, a focus on content and content-giving positions us as gatekeepers, and that is problematic.

A3) I don’t see knowledge as content. It is about ways of thinking… but it always has an object. One of the issues with the debate on teaching and learning in higher education is the loss of the idea of content and context. You don’t foreground the content, but you have to remember it is there; it is the vehicle through which students gain access to powerful ways of thinking.

Q4) I really enjoyed that, and I think you may have answered my question… But coming back to metrics, you’ve very much stayed in the discipline-based silos, and I just wondered how we can support students to move beyond those silos, how we measure that, and how to make that work.

A4) I’m more course-focused than discipline-focused. With the first year of the TEF, the idea of assessing quality across a whole institution is very problematic; it’s programme level we need to look at. Inter-professional, interdisciplinary work is key… But one of the issues here is that it can be implied that it gives you more… I would argue that it gives you something different… It’s another new way of seeing things. But I am nervous of institutions, funders, etc. who want to see interdisciplinary work as key. Sometimes it is the right approach, but it depends on the problem at hand. All approaches are limited and flawed; we need to find the one that works for a given context. So, I sort of agree, but I worry about the evangelical position that can be taken on interdisciplinary work, which is often actually multidisciplinary in nature – working alongside others, not genuinely working in an interdisciplinary way.

Q5) I think to date we have focused on objective academic ideas of what is needed, without asking students what they need. You have also focused on the undergraduate sector – how applicable is this to the postgraduate sector?

A5) I would entirely agree with your comment. That’s why pedagogical content knowledge matters so much. You have to understand your students first, as well as understanding this body of knowledge. It isn’t about being student-centred, but about understanding students, context and that body of knowledge. In terms of your question, I think there is a lot of applicability for PGT. For PhD students things are very different – you don’t have a body of knowledge to share in the same way; it is much more about process. Our department is PhD-only, and there process is central. That process is quite different at that level… It’s about contributing in an original way to that body of knowledge as its core purpose. That doesn’t mean students at other levels can’t contribute; it just isn’t the core purpose in the same way.

And with that we are moving to coffee… The rest of the programme for the day is shown below, updates to follow all day. 

11.50-12.35 Parallel Sessions from PTAS projects

12.35-13.35 Lunch and informal discussion

13.35-14.20 Parallel Sessions from PTAS projects

14.20-15.00 Refreshments and networking

15.00-16.00 Closing Keynote: Helen Walker, GreyBox Consulting and Bright Tribe Trust

16.00-16.30 Feedback and depart
