Sian B's Inaugural Lecture: "The Trouble with Digital Education" LiveBlog

Today I have been liveblogging Professor Sian Bayne’s Inaugural Lecture “The Trouble with Digital Education”. There will be some links and corrections to follow, but hopefully this gives a sense of her talk…

Prof. Jeff Hayward is introducing Sian, whom he's "delighted to say kind things about". Sian is known nationally and internationally as a deep thinker on digital education. Sian undertook her undergraduate degree here at the University of Edinburgh but went on to do her PhD at Queen Margaret's. That work on culture in cyberspace has informed her work since. She established the MSc in eLearning, now the MSc in Digital Education, which is one of the most renowned online MSc programmes.

Sian has worked with museums and galleries, and she has held many different research grants, keynotes, etc. Her Inaugural Lecture, "The Trouble with Digital Education", with live musical accompaniment, is sure to be stimulating. And she welcomes your questions.

So, over to Sian:

The music you heard when you came in was specially composed for this inaugural lecture (by Stephen ?), and it is quite dark, as I think he was picking up on the "troubled" part of my description. So, I want to start with lots of thank yous: to all of you, to my family, my wonderful colleagues on the MSc in Digital Education, and my extended colleagues from the programme. And I've been very fortunate with my mentors, here and before, for helping an academic career feel like an actually feasible proposition. Thank you to all of you [Not all names captured here, hopefully Sian will comment with the full roll call].

So… from colleagues to [pictures of] cactuses. And most of what I want to say is about what it is to think seriously about teaching in digital education environments. Digital education has always been a field surrounded by various promises and threats: on one side, ideas of efficiency, increasing scale and reach, increasing relevance to upcoming generations, personalisation… And on the other side the threats, or the things which appear threatened by digital education as a concept: co-presence, embodiment, community, surveillance, de-professionalisation… There is a case to be made for all of these, but they are driven by a very specific idea of the relationship between the social and the material.

I always go back to Hamilton and Friesen (2013) and their two ideas: instrumentalism – technologies as neutral means employed to serve our needs; and essentialism – technology as a driver of social practice and change. These positions suggest a strict separation of the social and the material, of technology and people. But it's not as if we don't have other positions here. We have science and technology studies, postmodernist criticism, etc. to draw upon; we have ways to think about the relationships and dependence of people and technology. But these ideas haven't really trickled down to the world of education.

Hamilton and Friesen (2013) talk of educational technologies as contingent forces, and (Fenwick and ?)…

We have this sense of promises – speeding things up, efficiencies, etc. – and threats suggesting that we reduce human-to-human contact… And with both in mind I want to focus on the idea of human automation… And I want to kick off this section with a clip from The Matrix [the helicopter request scene]. So we see this narrative in popular and educational culture about technology speeding up education, making us efficient: the terminology of "technology enhanced learning" (see Laurillard 2011); a recent Horizon 2020 call also refers to educational technologies for "more effective and efficient human learning". But the drivers aren't always economic; it is about improving learning, but in a very particular type of way, with a particular understanding… Sigges(?) idea of everyone having an "Aristotle on our shoulders".

Arthur C Clarke, in "Electronic Tutors", talks about technology in education, that it is nothing new… and that the ideal would be a teacher at one end of a log, a student at the other end… and that the world is "not only woefully short of teachers, it's running out of logs". But that suggests it is like running out of oil, whereas we have the means to change that…

What Clarke really didn't get right was the idea of electronic tutors happening any time now… That "no social or political" etc. system could withstand technology whose time has come. But actually Underwood and Luckin (2011) found that there is still a real absence of take-up, because there is also a lack of understanding of what technology can do.

So, where is the criticism here? Well, Neil Selwyn (e.g. 2014) challenges the neo-liberal efficiency take on digital education. Clegg (2003) talks about a need to refocus away from the functionality of e-learning environments back to the core relations between students and teachers and the conditions in which they find themselves. So we get to Feenberg (2003) and the idea of the mobilisation of the human touch… But maybe we don't have to choose between the material and the social. And this is the point Andrew Pickering (2005) makes here… seeing the non-human and human at once ("we should ride the Ostrich with more conviction!" [see image!]). Going back to Clarke for a moment, he had this idea that a non-human teacher would be better than any human teacher [much laughter], and I think we've certainly moved away from that.

So, now to a diversion. And Twitterbots. These are computer programs that tweet of their own accord (Dubbin 2013), and it is estimated that 8.5% of all active Twitter users may be bots of this kind (Twitter 2014). Some generate spam, but not all of them… So I want to talk about Twitterbots as a cultural form. So I am going to take you on a quick tour of Twitterbots…

For those not familiar with Twitterbots, @DearAssistant is a bot that just answers questions… You tweet a question, the bot tweets a response. This bot interrogates Wolfram Alpha. Another here is @earthquakesLA; there are many variants of this sort of bot… It draws data from the US Geological Survey and when there are alerts for earthquakes, it shares an alert. Those are quite earnest ones…
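[A quick aside from me: to give a sense of how little machinery a bot like @earthquakesLA needs, here is a rough Python sketch of that kind of alert bot. It is illustrative only – the USGS GeoJSON summary feed URL is the public one, but post_tweet() is a hypothetical stand-in for a real, authenticated Twitter API client, and none of this code is from the talk.]

```python
# Rough sketch of an @earthquakesLA-style alert bot (illustrative only).
# Assumes the public USGS GeoJSON summary feed; post_tweet() is a hypothetical
# stand-in for a real, authenticated Twitter API client.
import time
import requests

USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

def post_tweet(text):
    """Placeholder for a real Twitter API call - here we just print."""
    print("TWEET:", text)

def check_for_quakes(seen_ids):
    """Fetch recent quakes and announce any we haven't tweeted yet."""
    feed = requests.get(USGS_FEED, timeout=10).json()
    for quake in feed["features"]:
        if quake["id"] in seen_ids:
            continue
        props = quake["properties"]
        post_tweet(f"Earthquake alert: M{props['mag']} {props['place']}")
        seen_ids.add(quake["id"])

if __name__ == "__main__":
    seen = set()
    while True:            # poll the feed every five minutes
        check_for_quakes(seen)
        time.sleep(300)
```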

More playfully, @StealthMountain is a bot that trawls Twitter looking for people spelling "sneak peak" wrongly, correcting them that it's "sneak peek" – there's a whole paper on this. Another here is @oliviataters, who tweets as a teenage girl… She searches Twitter for "literally" and "embarrassed" and other such teenage girl phrases and remixes them into new tweets, often to…

@theDesireBoy (by Felix Jung) is made up of "I just want" tweets, often very poignant tweets here… And there is also @PottyBot – which just swears whenever The Archers comes on Radio 4. Which is a lot. Someone replied to that bot with a version of Radio 4 that is slowed just enough to evade The Archers!

So you can see there is a huge variety here. And as Mark Sample (2014) argues you can also have a “bot of conviction” tackling social issues. He has set up @NRA_Tally which pulls together information on shootings of more than 4 victims, and those events are tweeted with NRA headlines to make a potent point.

Kazemi (2013) writes about @TwoHeadlines, which generates jokes by stitching together two Google News stories.
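[Another aside from me: the splicing idea is roughly this simple. Below is a toy Python sketch of a @TwoHeadlines-style mash-up – not Kazemi's actual implementation; the sample headlines and the crude split-in-the-middle heuristic are my own stand-ins, whereas the real bot works from Google News stories as described above.]

```python
# Toy sketch of the @TwoHeadlines idea (not Kazemi's actual implementation):
# splice the start of one headline onto the end of another.
import random

# Stand-in headlines; the real bot pulls these from Google News.
headlines = [
    "Apple unveils its thinnest laptop yet",
    "Scotland braces for a weekend of heavy snow",
    "Local council approves plan for new cycle lanes",
]

def two_headlines(pool):
    """Join the first half of one headline to the second half of another."""
    a, b = random.sample(pool, 2)
    a_words, b_words = a.split(), b.split()
    return " ".join(a_words[: len(a_words) // 2] + b_words[len(b_words) // 2:])

print(two_headlines(headlines))
```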

So, a serious point about Twitterbots is that they foreground the influence of automation on modern life, and demystify the process (Dubbin 2013).

When we talk about ideas in digital education we always get the "but what are the implications for practice?" question, and whilst that can be mildly deflating at first, actually it is about thinking more widely about what these ideas mean… and about the expansive thinking of this team. And that is such a positive critique to have there.

With that I want to talk about the Teacherbot, and firstly to thank all involved in that initiative. The context here was the E-learning and Digital Cultures MOOC, which was on its third run and we wanted to do something different. These are huge courses; this one had around 12,000 students. They are very active students, very receptive, lots of tweeting and discussion. And so Hadi(?) developed our teacherbot for us. We as a teaching team generated the data to feed it. We had a simple web form in which each of us could develop rules. One was developed by Christine and Hamish (or the Christine/Hamish Assemblage): they would enter terms and synonyms, dates and deadlines, and instructions for what the bot should tweet… How this worked was that any tweet with #edcmooc would trigger the bot to go looking for a rule; if it found a rule then it would go ahead and tweet the appropriate text.
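[One more aside from me, to make the rule mechanism concrete: here is a minimal Python sketch of the kind of matching described above. It is not the actual Teacherbot code – the rule format, the example terms and responses, and the print-based reply are all my own hypothetical stand-ins based on the description: each rule pairs trigger terms/synonyms with the text the bot should tweet, and any #edcmooc tweet is checked against the rules in turn.]

```python
# Minimal sketch of the rule matching described above
# (illustrative only - not the actual Teacherbot code).

# Each rule pairs trigger terms/synonyms with the text the bot should tweet.
RULES = [
    {"terms": {"deadline", "due date", "submission"},
     "response": "The assignment details and dates are on the course site."},
    {"terms": {"lonely", "alone", "lost"},
     "response": "You're not alone - try the #edcmooc discussion threads and hangouts."},
]

def find_response(tweet_text):
    """Return the response for the first rule whose terms appear in the tweet."""
    text = tweet_text.lower()
    for rule in RULES:
        if any(term in text for term in rule["terms"]):
            return rule["response"]
    return None  # no rule matched: the bot stays quiet

def handle_tweet(tweet_text, author):
    """Called for every tweet containing #edcmooc."""
    response = find_response(tweet_text)
    if response:
        print(f"@{author} {response}")  # stand-in for a real Twitter API reply

handle_tweet("When is the #edcmooc deadline again?", "some_student")
```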

So, when we designed our rules – which is actually really quite difficult to do well… Effectively we divided it up. Jen and I did content tweets, Christine and Hamish did process-orientated rules, Jeremy did more socially orientated tweets… And they got interesting responses… The bot didn't always get it right, of course. But we did want the bot to be both playful and genuinely helpful – so we had it tweet extracts from readings… And we found students replying and then commenting on why on earth they were replying. And in one case Teacherbot got into a loop… but brought it back at the end… We also had some loosely pastoral exchanges too… Fabienne tweets about a connection between a film and a text; teacherbot jumps in, misunderstanding and worrying that she feels lonely.

We didn't pretend that teacherbot was real… But we wanted to sneak out the information… But when first switched on… it tweeted a quote from Melissa Terras hundreds of times, which looked a tad threatening at that scale. But you can see the teacherbot was really central to the #edcmooc community. And we had a great student blog post talking about teacherbot as "ambush teaching"! But was our teacherbot a "bot of conviction", a bot with purpose (Sample 2014)? Sample talks about bots of conviction needing to be:

– Topical – morning news, not lost love or existential anguish – not sure about twitterbot

– Data-based – which ours was

– Cumulative – the aggregate of those tweets gains power

– Oppositional – takes a stand – teacherbot kind of did…

It took a stand about what it is that we think we want in digital education. It didn't work on a deficit model of a lack of teacher time or compatibility, or of digital education as performance or instrument. With teacherbot we were working with excess. We weren't working with a supersessional model; it was about entanglement and how the teacher could achieve something via that entanglement. And it played across the embrace/resistance binary in educational technology, blurring those lines. And it moved from "what works?" to "what do we want?". So I think it made the move towards a bot of conviction…

Now, when I wrote this abstract I could have talked about any number of the Digital Education projects, particularly the MSc in Digital Education programme… But I want to leave you with our Manifesto for Digital Education, and to commit myself for the next 10 or 20 years of my career to thinking about what might be a "digital education of conviction".

Q&A

Q1 from Sian’s daughter Ula: You know the woman with the helicopter, why can’t we do that…

A1: I’ll tell you later… but I know it’s a cop out..

Q2, from Jen from Twitter: Some of these bots seem to have more empathy than human teachers… and maybe you can say something about empathy and conviction?

A2: perhaps if the bots come from a place of empathy that is enough… But we can think critically about whether empathy has to only be a human trait…

Q3: The chief engineer at Google calculates that in the 2040s computers will be self-aware, which could be seen as an essentialist view… He doesn't know how they will respond but he thinks they will demand respect and rights… So what happens when we have self-aware computers?

A3: I think that digital education has been prepared, by several decades of popular culture, for how we might engage with that idea… It's not that big a jump to have respect for an artificial intelligence; it doesn't seem a particularly problematic stance to take.

Q4: I was wondering if you could say more about presence… For my students Google Hangouts without video can feel like “warm bodies” but feedback may not… The bot seems to feel like warm bodies

A4: Yeah, this idea of whether actual presence matters, or just the perception of presence matters, is frankly all up for grabs

Q5: There seems to be an issue of currency and attention… Now in a world where there is a lot of currency but perhaps a scarcity of attention?

A5: We haven’t come close to cracking that yet… That idea of how we manage attention online. A real challenge for scholarship. Totally dependent on the online world for our work and research but also totally open to distraction in those spaces – and that might be email as much as Twitter etc…

Jeff: As always Sian has provided some extremely interesting food for thought, some of which reminded me of initial conversations about teacherbot… when we talked about whether it would work and, in a sense, that it didn’t matter as long as we learned something… and it seems that we’ve definitely learned something from that.

[me] And with that we end an excellent Inaugural Lecture with much applause and thanks to Sian.

