Repository Fringe 2013: Reports from the Blogosphere

As a Friday treat we thought we would share with you all of the reports on Repository Fringe 2013 that have been appearing across the blogosphere. These are brilliant personal records of the event and the workshops. They include some fantastic reflections, links to additional materials, and an opportunity to experience the event from someone else’s perspective.

We’ve decided to order these by category so take a wee browse and enjoy:

The Workshops
The Round Table Sessions

Planes, Trains and Automobiles

Developer Challenge

Pecha Kuchas and Presentations

Reflections and Feedback on Repository Fringe 2013

  • Gaz Johnson has provided a useful summary of Repository Fringe 2013, including some valuable feedback and suggestions for future events.
  • Richard Wincewicz has blogged about his first experience of the event – and of taking part in the Developer Challenge – in his guest post: My first Repository Fringe.
  • Lynette Summers of Cardiff Metropolitan University has written a great summary of the event for the Wales Higher Education Libraries Forum (WHELF) blog: Repository Fringe 2013.
  • Chris Awre has provided his reflections across the whole event on the Hull Information Management Blog: Edinburgh? Fringe? Must be a repository conference.

Still to come…  

If you have written a post on RepoFringe we would be more than happy to add it here and to our forthcoming summary post. Please just leave us a comment here or email us.


Taming the beast – working with Hydra together

Another guest blog post for you from Chris Awre, University of Hull:

It seems at first slightly self-rewarding to be able to use a conference blog to highlight further some of the points I made in my conference presentation.  I’m sure it is.  Nevertheless, I’d like to do so in the context of the workshop and roundtable I led on ‘Getting to the Repository of the Future’ and the conference as a whole, which I hope places any platform plug in its proper place.

To re-cap, Hydra is a project initiated in 2008 to create a flexible framework that could be applied to a variety of repository needs and solutions.  Hydra today is a repository solution that can be implemented (based around Fedora), a technical framework that can be adapted to meet specific local needs, and, of equal if not greater importance, a community of partners and users who wish to work together to create the full range of solutions that may be required.  There are, as of August 2013, 19 current partners, with a number of others considering joining: in Europe the University of Hull is joined by LSE and the Royal Library of Denmark as partners, with use also taking place at Glasgow Caledonian, Oxford, Trinity College Dublin (for the Digital Repository of Ireland), and the Theatre Museum of Barcelona.  Not a large number as it stands, perhaps, but each exploiting what Hydra can provide to meet varied needs, sharing experiences and ideas, and demonstrating how a flexible platform can be adapted.

I’d like to pick out three main themes:

  • What is Hydra?

In the Repository of the Future workshop one of the main points raised was about clarifying the purpose of a repository.  This allows it to be situated in a broader institutional context without necessarily competing with other systems.  In doing so, it suggests that repositories should focus their activity rather than suffer mission creep and dilute their core offering.  I was conscious of this as I described Hydra in my presentation as being able to manage any digital content the University of Hull wished us to.  Contradiction?  On one level, yes, and I am all too well aware of the need to clarify what the repository is actually doing so as to strengthen the message we give out: I am defining our repository service more succinctly for this reason, for the University and for library staff.  But that doesn’t mean the repository infrastructure shouldn’t be capable of managing different types of content so that when a use case arises the repository can offer that capability to address it.  Clarifying our repository’s purpose is thus emphasising that it is a service capable of managing structured digital content of all sorts, with foci around specific known collections.  Other Hydra partners have focused their developments on more specific use cases (e.g., Avalon for multimedia, or Sufia for self-deposit), albeit recognising that Hydra provides them with the wherewithal to expand this if they need to.  And if we can share the capability between us as part of a community, then we can expand functionality and purpose as we need to.

  • Repository as infrastructure

I mentioned repository infrastructure in the last paragraph.  A challenge I threw out to the workshop, and throw out here, is to go into an institutional IT department and ask whether the repository is infrastructure or an application.  The two are treated very differently, with the former, I would argue, often given more weight than the latter.  I would also suggest (and I’d welcome feedback on this) that repositories are more often considered an application.  However, if we are to take the management of digital collections seriously then they need to be treated as infrastructure, and the purpose of a repository built up from there.  A lot of the thinking behind Hydra is based on the repository underpinning other services through making content available in flexible ways to allow it to be used as appropriate.  Someone at the workshop referred to a repository as a ‘lake of content’.  Whatever the scope and purpose of a repository, managing that lake is an infrastructural role akin to managing the water supply network as opposed to focusing on the bathroom fittings.

  • Technical support

Key to Hydra’s evolution has been the dedication of the many software developers, from the various institutions that employ them, who contribute to it – a classic open source model in many ways.  I was asked following my presentation how Hydra had been successful in getting such commitment.  One part of the answer was the one I gave, that the software choice, Ruby on Rails, had proved very amenable to agile development and frequent input, and that the developers liked using it.  Another is the further point I made, that the US libraries can sustain projects such as Hydra because they recognise the value of technology to their libraries, and are prepared in many cases to back that up with specific staffing resource.  Certainly this is most evident at the larger institutions, but it goes beyond this as well: not for nothing is there a technically-oriented national Digital Library Federation through which digital library initiatives can be showcased and shared, and the developer-focused Code4Lib community.  Developer staffing within libraries in the UK is there in some cases, but is not widespread.  If we consider repositories as being part of a library’s future, do we need the technical commitment to ensure they can do the job they need to?  At Hull we rely on IT department staffing, as many do.  Perfectly adequate for managing an application, but is it an indication of real commitment?  Where it is not feasible to have local technical staff, is there a model that supports dedicated developer input as part of a collaboration?  Of course, even with dedicated technical resource it may not be feasible to do everything alone – hence the Hydra model of doing things together that partners the size of Stanford and Virginia continue to value.

<setting out stall>

Of course, at the University of Hull we view Hydra as being a route down which we can get to the repository of the future.  It provides us with the infrastructure we need to establish our repository’s purpose, but adapt and grow from this as the University requires.  It also allows us to say ‘yes’ when we are asked about the ability to manage different content, even if there may be associated staffing resource issues that need resolving.  We think this will stand us in good stead moving forward.  Hydra won’t necessarily be right for others as a technology, but I hope that the community aspects of working together technically can be adapted to suit regardless of technical platform.  If interested in pursuing more about Hydra as a technical solution, though, let me know ;-)

</setting out stall>


My Summary of Thursday 1st August

Today we bring you a rather belated live blog (entirely your Repository Fringe blog editor’s fault) from guest blogger Valerie McCutcheon, Research Information Manager at the University.  She is part of a team that provides support for managing a wide range of activity including datasets, publications, and other research outputs. Valerie blogs regularly at Cerif 4 Datasets.

Here is my brief summary of Repository Fringe Thursday 1st August.

Met some old friends and some great new ones and look forward to exploring several areas further including:

  • Open data – Are there good case studies out there from UK higher education institute based researchers that might illustrate to our researchers some potential benefits to investing time in making data openly available?  Had a good chat with Jacqui Taylor over lunch and hope we can follow this up.
  • Open Journal Systems, OpenAIRE compliance, and Repository Junction Broker – all sound like things we ought to progress and are on our list – need to see if we can find more time to investigate.
  • Concerns over quality of data put out there e.g. by Gateway to Research – I will follow up with Chris
  • I wondered if the ResourceSync might be a useful option or at least concept to emulate to address synchronisation of data with RCUK outputs systems – I will follow this up with Stuart Lewis

Overall the day exceeded my expectations and I got a lot out of it – thank you!



My First Repository Fringe

Today we bring you a guest post reflecting on the experience of being a first timer at Repository Fringe. Our blogger is Richard – and we’ll let him introduce himself…

My name is Richard Wincewicz and I work at EDINA as a software engineer. My background is synthetic chemistry but three years ago I got involved in the Islandora project, based on the east coast of Canada. A year ago I moved back to the UK and started my current position at EDINA.

First impressions

This was my first Repository Fringe and I was surprised at how comfortable it felt. I’ve not been in the repository field for that long but I’ve got to know some people and this was a great opportunity to catch up with them. Being relatively new also meant that there were plenty of people there that I’d not met before and so I spent a lot of time making new connections with people. The sessions were fairly informal and plenty of time was allowed between them to let people engage and share their thoughts and ideas. Even so, there were occasions where Nicola had to herd a number of stragglers (me included) into the next session because the impromptu discussions were so engrossing that we’d lost track of time.

Developer challenge

When I signed up I’d indicated that I wanted to take part in the developer challenge. At first I looked at the topic of ‘preservation’ and thought, “That’s a broad topic, I’m sure I can come up with a useful idea over the next couple of weeks.” On my way home on the evening before my entry had to be submitted I finally came up with an idea that was potentially useful and feasible given that I only had a night to get it done (as well as sleep and eat).

Over the previous couple of days I had heard a few people mention the lack of metadata provided when given content to store, alongside the lack of willingness of providers to change. My idea was to create a web service that would take any file that you wanted to throw at it and provide as much metadata as it could glean from the file in a usable form. There are plenty of tools around that will extract the embedded metadata in a file, the Apache Tika project being one of the more comprehensive ones, and my application was basically a front end for this.

The added value that I provided was to return the metadata in Dublin Core. This meant that this web service could be integrated into a repository workflow with very little effort. My plan was to expand the number of metadata schemas available to make it easier for the repository to incorporate the output directly but I sadly ran out of time. One thing that became clear while testing my code was that often the quality of the embedded metadata was poor. After discussing my project with Chris Gutteridge I decided that mining the document for relevant information would give richer metadata but require a lot more time to produce anything even remotely functional.
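To illustrate the kind of mapping involved, here is a minimal sketch (not Richard’s actual code): it takes a dictionary of embedded metadata, of the sort a tool like Apache Tika might extract from a file, and maps it onto Dublin Core elements. The key names and the example record are assumptions for illustration only.

```python
# Hypothetical mapping from common embedded-metadata keys to Dublin Core
# terms; real extractors emit many more keys than shown here.
TIKA_TO_DC = {
    "title": "dc:title",
    "Author": "dc:creator",
    "Content-Type": "dc:format",
    "Creation-Date": "dc:date",
    "subject": "dc:subject",
}

def to_dublin_core(embedded: dict) -> dict:
    """Return a Dublin Core-style record from embedded file metadata,
    keeping only the fields we know how to map."""
    record = {}
    for key, value in embedded.items():
        dc_field = TIKA_TO_DC.get(key)
        if dc_field and value:
            # Multiple source keys mapping to one DC field accumulate
            # rather than overwrite.
            record.setdefault(dc_field, []).append(value)
    return record

# Hypothetical embedded metadata for a PDF, as an extractor might return it.
example = {
    "title": "My first Repository Fringe",
    "Author": "R. Wincewicz",
    "Content-Type": "application/pdf",
    "X-Parsed-By": "org.apache.tika.parser.pdf.PDFParser",
}
print(to_dublin_core(example))
```

Unmapped keys (like the parser name above) are simply dropped, which is why expanding the set of target schemas, as Richard planned, mostly means adding more mapping tables.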

In the end I spent around 3 hours on my entry but I was proud that I had something that not only worked but didn’t fail horribly when I demoed it live to a roomful of people.


I enjoyed my first Repository Fringe immensely. I got a huge amount out of it both in terms of learning and networking. I plan to attend next year and hopefully find a couple more hours to work on my developer challenge entry.


(Open) Heaven is a Place on Earth – or How to Get to Utopia Without Really Trying

Today we bring you a guest post from Gareth J Johnson (@llordllama) on the Open Access and Academia Round Table led by Gareth and Dominic Tate on Thursday 1st August. Gareth is a former repository manager and is currently working towards a PhD that is examining issues of culture, influence and power related to open scholarship within UK academia at Nottingham Trent University.

In all the hubbub and hullabaloo of the Repository Fringe about wondrous technological solutions and efforts to bring us to the dawn of a new age of openness in scholarship, we thought it would be worth spending some time asking the question “So just what would the utopian end point of open access within academia be?”  It was, I think you’ll agree, a fairly large question to tackle and one that I don’t think we’ll claim we made conclusive headway on during the 90 minutes.  However, as an exercise in attempting to get everyone in the room to step back for a moment from concerns about the REF and having to meet senior institutional management’s expectations, and to consider what the end point of open access would ideally be, I think it was a reasonable success.

Participants in the Open Access and Academia Round Table session

Brief introductions from those present revealed a constituency comprising mostly repository workers, with a smattering of more technical staff and a publisher or two, which would likely bias the results of the discussions in a certain direction.  We started with the precept that the current OA situation in academia couldn’t be perfect, given people’s interest in attending the session and conference.  And at this point asked the first key question:

  • What could or would the utopian open access situation in academia look like?
  • What is involved?
  • Who is involved?

Early suggestions included ensuring that OA was simply built into academics’ natural practice and ensuring that openness in scholarship wasn’t siloed into simply research papers but embraced data, education, sculptures and other expressions of scholarship too.  At this point everyone was broken into small groups to discuss these issues.


A broad range of ideas came back from the groups, some of which, it was noted, are potentially mutually interdependent or diametrically opposed.  But as we’d said at the start, this was a utopian view where not everything could or would be achieved.  Aspirations ranged from the holistic to the specific, with desires running from open licences and no embargo periods, through transparency in Gold OA pricing, to XML over PDF as the standard format.  Interestingly, given the current UK situation and prevalence of Gold OA, there was some considerable desire for transformation of the scholarly dissemination environment too, with calls for open peer review or for institutions to take over the management of the same.

Overall, though, the discourse from the groups seemed to suggest a sense that OA should become the norm for academia, that it should be so regular and normal as to almost be engaged with without comment.  Embedded and invisible in this way, calls for research community engagement would likely find near-total compliance.

We asked the groups next to consider which of these activities were in their eyes the most significant, and then to go back and think more about the second of our key questions in relation to it.

  • How could or would this be achieved?
  • What needs to change?
  • What needs to stay the same?
  • What needs to happen next?

For the record none of the groups picked the same key task (interesting…), which meant we had five separate areas of OA utopia to be worked on as follows:

  1. Changing the culture
  2. OA achieving invisibility
  3. Institutional management of peer review
  4. Transparent pricing for all OA
  5. Embedding OA in the research process

What follows are some of the main points that came out:

Changing the Culture

  • Considering the needs of different stakeholders – students and academics – in scholarly discourse and access.
  • Same funding and policies for open access for all disciplines.
  • Students to have a more embedded understanding of where their research information comes from – thus not just library-led training sessions on information skills.
  • Institutions to do their own publication of monographs and journals, thus everyone would have the same level of understanding.
  • OA league tables, rather than traditional measures of excellence, for recruiting new students.
  • VCs need to be on board with OA as it should impact on every decision made at the university.
  • Changing internal promotion criteria for academics through greater recognition of OA outputs.
  • HEFCE to fund universities equally based on OA and not funding universities based on traditional institutional outputs.
Participants in the Open Access and Academia Round Table


OA Achieving Invisibility

  • Repository Junction Broker making harvesting easier from a centralised point.
  • Capturing multiple versions (à la Jack Kerouac’s On the Road, which existed in multiple forms) as some scholars want to interact with and use earlier/later versions of research.
  • Open peer review could support OA.  Noted the Nature experiment where this was tried and hadn’t really been taken up by the academic community.
  • Issues about who does this peer review and whether the age/generation of academics has an impact on it – would Generation Z academics fit more naturally into an open scholarly culture than Gen X and Baby Boomers?
  • Are there penalties for failure to comply with the RCUK mandate?


There was a suggestion from the other groups that future generations might not want to reach the same goals for OA as the current movement members – might this mean a shift in a hitherto unexpected direction?  Should students be consulted about how and where OA should go was another thought.

Managing Peer Review

  • Collaborative approach to be taken within disciplines with champions nominated by institutions to take this forward.
  • Academics and editors need to continue to maintain their involvement in peer review, to retain rigour and quality.
  • The way that research outputs are evaluated needs to be revised and changed, recognising that research has an impact beyond academia and that simple citation counts alone are insufficient criteria for judging excellence.  Thus an embrace of impact evidence, altmetrics and a rethink of the whole qualitative vs quantitative value of research outputs is needed.


There was a discussion point that Social Sciences/Humanities work less with citation counts, and that this would help them to be viewed as “as valuable as” science by senior faculty and external auditors.  There was also a discussion around how the time it takes for non-STEM subjects to become recognised as having achieved impact is much longer; although a counterpoint is that some pharmaceutical research only becomes recognised as significant many years after it has been done as well.

Issues around the ability of scholarly dissemination to transform and evolve through the auspices of OA were examined as well.  In particular a point was raised that methods and routes of communication have evolved considerably in the past couple of decades and yet dissemination of scholarship has not kept pace, a point near to this author’s heart in his own current researches.  It was suggested that researchers still function within a print mentality in a digital world, and that this is perpetuated by the way impact is currently calculated.

Dominic Tate with participants in the Open Access and Academia Round Table

Transparent Pricing

  • A transparent pricing system for OA is urgently needed.  The past was an opaque system whereby the library spent money but academics were unaware of the levels.  Right now is a golden (no pun intended) opportunity to make the cost of scholarly dissemination transparent, but it seems there continues to be a danger that we will continue to operate in an opaque system.
  • Universities should make public how much funding they have for open access, and exactly what the APCs are for each of the publishers that people can publish with.
  • JISC Collections should do this as well, with their memberships, to say “This is exactly how much the average APC will be”.
  • A nationally coordinated database with this information in it, so it would be possible to see who has the best deals with publishers in this respect.  This, it was suggested, is a role for libraries, who could be the advisory point within the university for publishing in OA, or publishing full stop.
  • Universities should recommend average APCs for each discipline, so it would be possible for authors to see where they were paying over the odds.  And where academics went over the limit, they would have to make a case justifying this – thus allowing other options to be presented to them as well, rather than simply publishing irrespective of cost.
  • Fears over burning through APC funds too quickly at some institutions.  If universities came in under cost, then this APC funding could be released back to the research funding streams.


Some universities present admitted they were going to be as open as they could be about the funding received and expended, although the level of granularity would vary.  It was suggested that the RCUK would not be as transparent as individual universities would be in terms of funding levels.  The problem, though, is that the value quoted and the value paid for APCs could vary due to the time between submission and invoice, and fluctuations in publisher policy and currency exchange.  It was highlighted that publishing workflows drawn up by some universities locked academics into more expensive gold routes rather than cheaper gold or green options as a matter of policy.  However, other universities countered this with a policy that aimed to satisfy funder requirements without necessarily taking the “easy” APC route.

Participants in the Open Access and Academia Round Table.

Embedding OA

  • OA should pervade academic thinking at all stages – from teaching first-year undergraduates to information literacy training.  Awareness raising and education about scholarly dissemination should be embraced at every opportunity.
  • Getting the finance and research office staff educated about deals and options is key.  Researchers might only make the odd publication, but these people deal with the wider institutional publishing framework.  If they know about the options they might be able to help steer academics down an OA route that might otherwise be missed.
  • Better terminology – ‘green’ and ‘gold’ were cited as terms that need to be got rid of.  It does not help that publishers, it was suggested, espouse different views or terms of what OA is, which adds to the confusion of academics.
  • FOI, it was suggested, helps this process, as researchers might have to consider clarity in their work in advance of potential openness down the line.
  • Getting independent researchers – those based outside of universities – aware of and involved in OA was important.  Currently we lack a critical mass of OA content, but in time this could be reached and help them to become aware of it.
  • A call for academia to lobby publishers and funders for clarity, and to use our influence on academics to help standardise the way they talk about OA.

The Round Table whiteboard with discussion topics and key points listed.

By the end, had we reached utopia?  Sadly no; if anything I think we’d underscored the long way there is to go to achieve the final evolution of open scholarly discourse.  There are a lot of issues, but at least within the room there was a collegiality and positivity about working towards achieving some of these goals.  Were we to revisit this workshop at next year’s Fringe, it would indeed be interesting, post-REF, to see what steps towards achieving some of these hopes, dreams and ideas had actually been made within the UK.

My thanks to our wonderful delegates for their thoughts, to the Fringe organisers for giving us the space to run this session, and last but most certainly not least to my co-workshop chair Dominic Tate, without whom none of this would have been possible.


Getting to the Repository of the Future – reflections

A week after the Getting to the Repository of the Future workshop, it is useful to reflect on what thoughts emerged from the event that we can take forward.  The workshop itself was very helpfully blogged by resident RepoFringe bloggers Rocio and Nancy, which captures many of the points raised.  There was also a follow-on round table discussion held the day after, from which additional ideas and suggestions emerged.  All contributions are being written up into a document to inform Jisc in their planning, but will also be openly reflected back to inform conversations back home within institutions and elsewhere.

By way of continuing the discussion online, I reflect here my own initial thoughts and conclusions from the discussion.  Feedback very welcome.

  • Repositories will become capable of dealing with content types according to their needs

Repositories have been established to manage many different types of material, with probably the largest focus being around research articles.  Nonetheless, with digital content collections of all sorts growing and needing better management, can repositories cope with this?  Discussion suggested that we have a technology available to us that can be used for a variety of use cases, and so can usefully be exploited in this way.  In doing so, though, it was recognised that we need to better understand what it means to manage different types of material so this exploitation can take place effectively and add to the value of the content.  As to the type of repository, it should be recognised where materials benefit from being managed through specific repositories rather than a local repository, e.g., managing software code through GitHub or BitBucket, or holding datasets in specific data centres.  Overall message emerging: understand more about how to deal with different types of content, and be realistic about where they are best managed as part of this.

  • Repositories will move beyond being a store of PDFs to enable re-use to a greater extent

It was one very specific comment at the workshop that highlighted that many repositories are simply a store of PDF files (there was also a debate about whether repositories holding only metadata are real repositories, but that’s another discussion).  PDF files can be re-usable if generated in the right way (i.e., are not just page images), but are never ideal.  Part of the added value that repositories can bring is facilitating re-use, and enabling the benefits that come from this.  To do this we need to move to a position where we can effectively store non-PDF versions instead of or alongside PDFs, or identify ways of storing non-PDF files by default.  The view expressed was that if we don’t address this we risk our repositories becoming silos of content with limited use.

  • Repositories will benefit greatly from linked data, but we need persistent identifiers to be better established and standardised

There is a chicken and egg aspect to this, as there is with a lot of linked data activity.  Content is exposed as linked data, but is not then consumed as much as might be anticipated, in part because the linked data doesn’t use recognised standards, and in particular standard identifiers, in its expression.  These weren’t used because there wasn’t enough activity within the community to inform a standard to use, or because there are a number of different standards but a lack of an authoritative one.  One example is a standard list of organisational identifiers: there are a few in existence, but a need to bring these together, a task that Jisc is currently investigating.  Repositories could make use of linked data if the standards existed, but where is the impetus to create them?  An opposing view to this is that the standards pretty much do exist, and it is more a matter of raising awareness of the options and opportunities in how these can be effectively used within repositories, e.g., ORCID, which is now starting to gain traction, or the Library of Congress subject headings.  Whichever view you take, linked data screams ‘potential’, and there was little doubt that it will become part of the repository landscape in a far greater way than it does today.
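To make the idea concrete, here is a minimal sketch of what exposing a repository record as linked data with standard identifiers might look like. It hand-formats N-Triples statements using Dublin Core terms, an ORCID iD as the author identifier, and an LCSH subject URI; the item URI and the specific identifiers are made up for illustration.

```python
# Dublin Core terms namespace, the vocabulary most repositories already use.
DCTERMS = "http://purl.org/dc/terms/"

def ntriple(subject, predicate, obj, literal=False):
    """Format one N-Triples statement; obj is a URI unless literal=True."""
    o = f'"{obj}"' if literal else f"<{obj}>"
    return f"<{subject}> <{predicate}> {o} ."

# Hypothetical item URI and illustrative identifiers.
item = "http://repo.example.ac.uk/item/1234"
orcid = "https://orcid.org/0000-0002-1825-0097"   # example ORCID iD, not a real author here
subject_uri = "http://id.loc.gov/authorities/subjects/sh00000000"  # placeholder LCSH URI

triples = [
    ntriple(item, DCTERMS + "title", "Getting to the Repository of the Future", literal=True),
    ntriple(item, DCTERMS + "creator", orcid),        # author identified by ORCID, not a name string
    ntriple(item, DCTERMS + "subject", subject_uri),  # subject identified by a shared authority URI
]
print("\n".join(triples))
```

The point of the sketch is the consumption side of the chicken-and-egg problem: because the creator and subject are shared URIs rather than local strings, any other system can join this record against its own data without bespoke matching.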

  • Repositories will focus on holding material and preserving it, leaving all other functions to services built around the repository / Repositories will become invisibly integrated within user-facing services

At first sight this theme appears to suggest that we reduce a repository, which seems to contradict the benefits that the previous statements suggest.  Discussion at the workshop, though, saw this more as getting repositories to play to their strengths; we need somewhere to store and preserve digital ‘stuff’, using a digital repository as the equivalent to print repositories.  Of course it can be held in a way that allows it to be exploited through other services, but should we not focus on what a repository does really well rather than become application managers as well?  Discuss.  In taking this line, we enable content to be made available from the repository (a ‘lake of content’ as expressed by one workshop attendee) wherever it is needed; do users need to know where it came from?  Issues of perceived value clearly raise their head here given the battles to establish repositories in the first place, and moving in the suggested direction will certainly require attention to this with budget-holders.  But for users this was felt to make sense.  One approach suggested was to consider the repository as infrastructure rather than application, as this may change views of the support required.

  • Repositories will be challenged by other systems offering similar capability / Repositories will develop ways of demonstrating their impact

This theme was a natural follow-on to the previous one.  The debate about CRISes storing content, or VLEs for that matter, seems high on the agenda in affected institutions, and will no doubt continue.  This suggests a need for clarity in the role of each system, and an understanding of their respective benefit and impact for the institution in how they work together.  We cannot take repositories for granted, though the general perception at the workshop was that they have huge value (biased audience I know, but one with experience) and we need to continue identifying how we demonstrate that to best serve our institutional needs.

So, a full afternoon.  No blinding flashes of inspiration, perhaps, but some useful staging posts against which we can plot the future course of repositories over the next 2, 5, or 10 years.  Repositories will only be what they are then because of what we choose to do now.

My main general takeaways from the workshop:

  • The role and need for a repository as a place to manage digital ‘stuff’ seems well accepted and here to stay
  • There is a need to restate and clarify the purpose of our individual repositories, and to take ownership/leadership in how they develop
  • No specific gaps were perceived – we know what we wish to achieve with repositories, we just need a way of doing it
  • We need to clarify the barriers getting in the way and look at ways of overcoming them

What are your thoughts?  Or, indeed, what processes would work best to address these points (both institutionally and across the community)?


Reflecting Back on the Fringe

I’ve written elsewhere about my hopes and expectations prior to attending the event as a whole, and you can read about the sessions in full on the Fringe Blog, should you wish to get a taste of the whole event.  Since there have been so many wonderful posts on the individual sessions I’m not going to duplicate their efforts; rather, I’ll just share some key points I learned.

  • There is still a lot of very active development going on in and around repositories – it hasn’t been totally subsumed by the REF and CRISs.
  • There is a real feeling of positivity engendered by people working in this sector.  They have very tough jobs, but they all seem to relish it.
  • A sung paper is a thing of joy and delight – more unusual presentations next time please (for the Gen Y and Z people at least!)
  • The fear of cocking up a REF submission is paramount for many repository managers.  The REF has given them a greater institutional value and prominence, but greater risks come with greater reward.
  • Symplectic isn’t a CRIS.  Better not tell my old bosses that, they’d be most upset.
  • SWORD works a treat to populate a repository from external sources.
  • Metadata is either a complete waste of time or the most critical element.  Honestly, I’m still not sure which way to jump on that one.
  • Few digital systems last longer than 15 years (except in the NHS) so planning for sustainability beyond that is a futile activity.
  • Nicola’s team at Edinburgh puts on an excellent conference and makes it look effortless.  Thanks and well done!
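On the SWORD point above: SWORD is an HTTP/AtomPub-based deposit protocol, which is what makes it so handy for populating a repository from external sources. As a minimal sketch, here is how a client might assemble a SWORD v2 binary deposit request (the endpoint URL, credentials, and package bytes are all placeholders; in practice you would use a client library such as sword2 and actually send the request):

```python
# Sketch only: build (but do not send) the headers for a SWORD v2
# binary deposit of a zip package to a hypothetical collection endpoint.
import base64
import hashlib

def build_deposit_request(url, username, password, package):
    """Return (url, headers) for a SWORD v2 deposit of a zip package."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {credentials}",
        "Content-Type": "application/zip",
        # MD5 digest lets the server verify the payload arrived intact
        "Content-MD5": hashlib.md5(package).hexdigest(),
        "Content-Disposition": "attachment; filename=deposit.zip",
        # SWORD packaging identifier for a plain zip of files
        "Packaging": "http://purl.org/net/sword/package/SimpleZip",
    }
    return url, headers

url, headers = build_deposit_request(
    "https://repository.example.ac.uk/sword2/collection/articles",  # placeholder
    "depositor", "secret", b"fake-zip-bytes")
print(headers["Packaging"])
```

The actual deposit is then just an HTTP POST of the package bytes with those headers, which is why SWORD slots so easily into external workflows.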

And my favourite quote from the whole event

  • Speaker “So, how long is your repository going to last?”
  • Audience member “Probably until the end of the REF.”

Not many months now to see how true that one is – will repositories suddenly dip off the radar in November, or will REF 2020 help keep their light shining brightly?



Developer Challenge: The Results

The winners of the developer challenge were announced during the Show & Tell Session just before the closing keynote.

The top prize went to Russell Boyatt for his Preserving a MOOC toolkit. This idea fits very well with the preservation theme of this developer challenge. As universities put more resources into deploying MOOCs, it is very appropriate that capturing the social interaction generated by students and their tutors should become a priority, enabling future analysis, feedback and validation of MOOCs. This hack was therefore very timely and inspired our judges. It provided them with a take-home message: let’s do more to save MOOC interaction data, and we must do it now!

There were two runners-up:

  • The Image Liberation Team, made up of Peter Murray-Rust and Cesare Bellini, which overlays the license type on top of an image.
  • An ePrints plug-in for image copyright by Chris Gutteridge, which adds a license and copyright statement to an image in ePrints.

There were an additional two entries:

  • The Preservation Toolkit from Patrick McSweeney which provides a webservice for file format conversion.
  • The Metadata Creator from Richard Wincewicz which extracts the metadata embedded in PDF files.

These last four hacks are all about improving metadata: its quality and its ease of capture. This gives a strong signal as to what is a major concern for repositories and their users.
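To give a flavour of the kind of extraction a hack like the Metadata Creator performs, here is a deliberately simplified illustration of pulling the Info dictionary out of an uncompressed PDF. This is a toy for clarity only: real PDFs usually compress their streams and may carry XMP metadata, so in practice a library such as pypdf does the heavy lifting.

```python
# Toy illustration: read simple literal-string Info entries from raw
# PDF bytes. Not production code; real tooling handles compression,
# string encodings, and XMP metadata.
import re

SAMPLE_PDF = (b"%PDF-1.4\n"
              b"1 0 obj << /Title (My Thesis) /Author (A. Scholar) >> endobj\n"
              b"trailer << /Info 1 0 R >>\n%%EOF")

def extract_info(pdf_bytes):
    """Return a dict of simple Info-dictionary entries found in the bytes."""
    return {key.decode(): value.decode()
            for key, value in re.findall(
                rb"/(Title|Author|Subject)\s*\((.*?)\)", pdf_bytes)}

print(extract_info(SAMPLE_PDF))
# → {'Title': 'My Thesis', 'Author': 'A. Scholar'}
```

Even this crude pass shows why embedded metadata is attractive for repositories: a deposit interface can pre-fill title and author fields rather than asking the depositor to retype them.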

These were all very interesting and exciting hacks! It was a challenge in itself for the judges to reach a decision and award the prizes. They had to take into account the novelty, relevance and potential of each idea, and balance these against the production of code during Repository Fringe. Not an easy task! Thanks again to our four judges, Paul Walk, Bea Alex, Stuart Lewis and Padmini Ray-Murray, for their excellent work!

What struck me most during the 24 hours of the challenge is that most developers were happy to enter a hack but didn’t want to win!  Maybe it was the lack of time to dedicate to the coding due to the Repository Fringe sessions running in parallel, ‘it’s not fully working yet‘. Maybe it was that some of them had won previous challenges, ‘been there, done it and got the T-shirt‘. Maybe it was the lack of a new generation of coders to compete with, ‘where is the new blood?‘. Maybe prizes are not the main motivation.

The feeling was that the challenge should come from the questions to be answered rather than from competition with other developers. There was a demand for a different type of event where developers could work together to solve problems set as goals. This would give developers a chance to collaborate, learn from each other, and code solutions to important and current issues. The opportunity to learn and demonstrate their skills seems more valuable to developers than prize money. It is more important to have fun, meet other people and build a developer community. Back to basics! I couldn’t agree more.