Data Carpentry & Software Carpentry workshops

The Research Data Service hosted back-to-back two-day workshops in the Main Library this week, run by the Software Sustainability Institute (SSI) to train University of Edinburgh researchers in basic data science and research computing skills.

Learners at Data Carpentry workshop

Software Carpentry (SC) is a popular global initiative, originating in the US, aimed at training researchers in good practice in writing, storing and sharing code. Both SC and its newer offshoot, Data Carpentry, teach methods and tools that help researchers make their science reproducible. The SSI, based at Edinburgh Parallel Computing Centre (EPCC), organises workshops for both throughout the UK.

Martin Callaghan, University of Leeds, introduces goals of Data Carpentry workshop.

Each workshop is taught by instructors trained by the SC organisation, using proven methods of delivery, to learners working on their own laptops, with plenty of support from knowledgeable helpers. Instructors at our workshops came from Leeds and EPCC. Comments from the learners – staff and postgraduate students from a range of schools – included ‘Variety of needs and academic activities/disciplines catered for. Useful exercises and explanations,’ and ‘Very powerful tools.’

Lessons can vary between different workshops, depending on the level of the learners and their requirements, as determined by a pre-workshop survey. The Data Carpentry workshop on Monday and Tuesday included:

  • Using spreadsheets effectively
  • OpenRefine
  • Introduction to R
  • R and visualisation
  • Databases and SQL
  • Using R with SQLite
  • Managing Research & Data Management Plans
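
To give a flavour of these lessons, here is a minimal sketch of the kind of exercise the database lessons build towards – written in Python (the language of the Software Carpentry workshop) rather than R, with a survey table invented purely for illustration:

    import sqlite3

    # An in-memory database stands in for the SQLite file used in class.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Hypothetical survey table, invented for illustration.
    cur.execute("CREATE TABLE surveys (species TEXT, weight REAL)")
    cur.executemany("INSERT INTO surveys VALUES (?, ?)",
                    [("DM", 42.0), ("DM", 38.5), ("PB", 31.0)])

    # The kind of aggregation taught in the SQL lesson:
    # mean weight per species.
    for species, mean_weight in cur.execute(
            "SELECT species, AVG(weight) FROM surveys GROUP BY species"):
        print(species, round(mean_weight, 1))

    conn.close()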

The Software Carpentry workshop was aimed at researchers who write their own code, and covered the following topics:

  • Introduction to the Shell
  • Version Control
  • Introduction to Python
  • Using the Shell (scripts)
  • Version Control (with Github)
  • Open Science and Open Research
Software Carpentry learners

Clearly the workshops were valued by learners and very worthwhile. The team will consider how it can offer similar workshops in the future at a similarly low cost; your ideas are welcome!

Robin Rice
EDINA and Data Library

New MOOC! Research Data Management and Sharing

[Guest post from Dr. Helen Tibbo, University of North Carolina-Chapel Hill]

The School of Information and Library Science and the Odum Institute at the University of North Carolina-Chapel Hill and the MANTRA team at the University of Edinburgh are pleased to announce the forthcoming Coursera MOOC (Massive Open Online Course), Research Data Management and Sharing.

This is a collaboration of the UNC-CH CRADLE team (Curating Research Assets and Data Using Lifecycle Education) and MANTRA. CRADLE has been funded in part by the Institute of Museum and Library Services to develop training for both researchers and library professionals. MANTRA was designed as a prime resource for postgraduate training in research data management skills and is used by learners worldwide.

The MOOC uses the Coursera on-demand format to provide short, video-based lessons and assessments across a five-week period, though learners can proceed at their own pace. Although no formal credit is assigned for the MOOC, Statements of Accomplishment will be available, for a small fee, to any learner who completes the course.

The Research Data Management and Sharing MOOC will launch 1st March, 2016, and enrolment is open now. Subjects covered in the 5-week course follow the stages of any research project. They are:

  • Understanding Research Data
  • Data Management Planning
  • Working with Data
  • Sharing Data
  • Archiving Data

Dr. Helen Tibbo from the School of Information and Library Science (SILS) at the University of North Carolina at Chapel Hill delivers four of the five sets of lessons, and Sarah Jones, Digital Curation Centre, delivers the University of Edinburgh-developed content in Week 3 (Working with Data). Quizzes and supplementary videos add to the learning experience, and assignments are peer reviewed by fellow learners, with questions and answers handled by peers and team teachers in the forum.

Staff from both organizations will monitor the learning forums and the peer-reviewed assignments to make sure learners are on the right track, and to watch for adjustments needed in course content.

The course is open to enrolment now, and will ‘go live’ on 1st March.
https://www.coursera.org/learn/research-data-management-and-sharing

Hashtag: #RDMSmooc

A preview of one of the supplementary videos is now available on YouTube:
www.youtube.com/watch?v=yhVqImna7cU

Please join us in this data adventure.
-Helen

Dr. Helen R. Tibbo, Alumni Distinguished Professor
President, 2010-2011 & Fellow, Society of American Archivists
School of Information and Library Science
201 Manning Hall, CB#3360
University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3360
Tel: 919-962-8063
Fax: 919-962-8071
tibbo@ils.unc.edu

MANTRA @ Melbourne

The aim of the Melbourne_MANTRA project was to review, adapt and pilot an online training program in research data management (RDM) for graduate researchers at the University of Melbourne. Based on the UK-developed and acclaimed MANTRA program, the project reviewed current UK content and assessed its suitability for the Australian and Melbourne research context. The project team adapted the original MANTRA modules and incorporated new content as required, in order to develop the refreshed Melbourne_MANTRA local version. Local expert reviewers ensured the localised content met institutional and funder requirements. Graduate researchers were recruited to complete the training program and contribute to the detailed evaluation of the content and associated resources.

The project delivered eight revised training modules, which were evaluated as part of the pilot via eight online surveys (one for each module) plus a final, summative evaluation survey. Overall, the Melbourne_MANTRA pilot training program was well received by participants. The content of the training modules generally gathered high scores, with low scores markedly sparse across all eight modules. The participants recognised that the content of the training program should be tailored to the institutional context, as opposed to providing general information and theory around the training topics. In its current form, the content of the modules only partly satisfies the requirements of our evaluators, who made valuable recommendations for further improving the training program.

In 2016, the University of Melbourne will revisit MANTRA with a view to implementing evaluation feedback in the program; updating the modules with new content, audiovisual materials and exercises; augmenting targeted delivery via the University’s LMS; and working towards incorporating Melbourne_MANTRA in induction and/or reference materials for new and current postgraduates and early career researchers.

The current version is available at: http://library.unimelb.edu.au/digitalscholarship/training_and_outreach/mantra2

Dr Leo Konstantelos
Manager, Digital Scholarship
Research | Research & Collections
Academic Services
University of Melbourne
Melbourne, Australia

Jisc Data Vault update

Posted on behalf of Claire Knowles

Research data are being generated at an ever-increasing rate. This brings challenges in how to store, analyse, and care for the data. Part of this problem is the long term stewardship of researchers’ private data and associated files that need a safe and secure home for the medium to long term.

The Data Vault project, funded by the Jisc #DataSpring programme, seeks to define and develop a Data Vault software platform that will allow data creators to describe and store their data safely in one of a growing number of options for archival storage. These options may include cloud solutions, shared storage systems, or local infrastructure.

Future users of the Data Vault are invited to Edinburgh on 5th November to help shape the development work through discussions with the project team on use cases, example data, retention policies, and metadata.

Book your place at: https://www.eventbrite.co.uk/e/data-vault-community-event-edinburgh-tickets-18900011443

The aims of the second phase of the project are to deliver a first complete version of the platform by the end of November, including:

  • Authentication and authorisation
  • Integration with more storage options
  • Management / monitoring interface
  • Example interface to CRIS (PURE)
  • Development of retention and review policy
  • Scalability testing

Working towards these goals, the project team have had monthly face-to-face meetings, with regular Skype calls in between. The development work is progressing steadily, as you can see via the GitHub repository: https://github.com/DataVault, where there have now been over 300 commits. Progress is also tracked on the open Project Plan, where anyone can add comments.

So remember, remember the 5th November and book your ticket.

Claire Knowles, Library & University Collections, on behalf of the Jisc Data Vault Project Team

Research Data Alliance – report from the 6th Plenary

The Research Data Alliance or RDA is growing about as fast as the data all around us. It got off the ground in 2012 with the support of major research funders in Europe, the US and Australia and has since grown to over 3,000 members. The latest plenary in Paris set a new registration record of ~700 ‘data folk’ including data scientists, data managers, librarians and policy-makers. The theme was Enterprise Engagement with a focus on Research Data for Climate Change.

Not an ordinary conference

What sets RDA apart from other data-related organisations is not just the size of its gatherings, but its emphasis on making change. Parallel sessions are filled not with individual presentations of research papers, but with collaborative activities that lead to outputs usable in the real world. Working groups are approved by governance structures and coalesce around concrete problems that cannot be solved by individual organisations but require new top-level approaches. They are required to produce their deliverables and close up shop after an 18-month period. Interest groups are allowed to exist longer, but are encouraged to spin off working groups to address challenges as they are identified through group discussion.

Hard-working groups

Since 2012, these working groups have produced some impressive deliverables and pilots that, if implemented across the Web and across organisations and countries, could speed up research and improve reproducibility. They are governed by an elected group of experts from around the world. Some currently active projects are:

  • Data Foundation and Terminology WG: defining harmonised terminology for diverse communities used to their own data ‘language’
  • Data Type Registries WG: building software to implement a DTR that can automatically match up unknown dataset ‘types’ with relevant services or applications (such as a viewer)
  • PID Information Types WG: Creating a single common API for delivering checksums from multiple persistent identifier service providers (DataCite and others)
  • Practical Policy WG: building on a previous WG that collected the machine-actionable policies practised by different data centres and repositories, this group will register those policies to encourage repository managers to move towards a harmonised set.
  • Scalable Dynamic Data Citation WG: to solve the difficulty of properly citing dynamic data sources, the recommended solution allows users to re-execute a query with the original time stamp and retrieve the original data, or to obtain the current version of the data (see the sketch after this list).
  • Data Description Registry Interoperability WG: to solve the problem of datasets scattered across repositories and data registries, the group built the Research Data Switchboard, linking datasets across platforms.
  • Metadata Standards Directory WG: By guiding researchers towards the metadata standards and tools relevant to their discipline, the directory drives up adoption of those standards, improving the chances of future researchers finding and using the data.
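
To make the dynamic-citation recommendation concrete, here is a minimal sketch – not the WG’s reference implementation – of re-executing a query with its original time stamp against a versioned table. The table and column names are invented for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Versioned records: each row notes when it became current and when
    # (if ever) it was superseded. Schema invented for illustration.
    cur.execute("CREATE TABLE readings "
                "(site TEXT, value REAL, valid_from TEXT, valid_to TEXT)")
    cur.executemany("INSERT INTO readings VALUES (?, ?, ?, ?)", [
        ("A", 1.0, "2014-01-01", "2015-06-01"),  # later corrected
        ("A", 1.1, "2015-06-01", None),          # current version
        ("B", 2.0, "2014-01-01", None),
    ])

    def query_as_of(timestamp):
        """Re-execute the cited query as of its original time stamp."""
        return cur.execute(
            "SELECT site, value FROM readings "
            "WHERE valid_from <= ? AND (valid_to IS NULL OR valid_to > ?)",
            (timestamp, timestamp)).fetchall()

    print(query_as_of("2014-12-31"))  # the data as originally cited
    print(query_as_of("2016-01-01"))  # the current version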

Members of the RDM team have been involved in library- and repository-related interest groups and Birds of a Feather groups, where surveys of current practice have been circulated.

Not all men at RDA! Dame Wendy Hall from the Web Science Institute leads a Women’s Networking Breakfast – photo courtesy of @RDA_Europe

RDA and climate change

Climate science was prominent at the 6th RDA plenary, not only because of the imminent United Nations COP talks in Paris, but because of issues of critical importance for the world today. For some years – driven by the climate model inter-comparison work underpinning Intergovernmental Panel on Climate Change (IPCC) reports and by the massive datasets from Earth observation – climate science has sat at the intersection of high performance computing, big data management, and services to support and stimulate research, commerce, and governmental initiatives.

Assessing the risks posed by climate change, and strategies for adaptation and mitigation, sharpens the need not only to solve the technical problems of bringing together diverse data (social, soil, climate, land-use, commercial, …) but also to address the policy challenges, given the diverse organisations that need to cooperate. This is a domain that builds on services giving access to data, with computation close to the data enabled by e-infrastructure (such as EGI), and one that requires ever stronger approaches to brokering these resources and services, to permit their orchestration and integration.

Among initiatives presented in the climate-related sessions were:

  • GEOSS – the GEOSS Common Infrastructure allows users of Earth observations to access, search and use the data, information, tools and services available through the Global Earth Observation System of Systems
  • The Global Agricultural Monitoring (GEOGLAM) initiative, a response to growing calls for improved agricultural information
  • An RDA group focused on wheat – price volatility, driven in part by climate unpredictability, has become a major concern
  • The IPSL Mesocentre
  • IS-ENES, which develops services especially for climate modelling
  • Copernicus, seeking to “support policymakers, business, and citizens with improved environmental information. Copernicus integrates satellite and in-situ data with modeling to provide user-focused information services”
  • CLIPC, which will provide access to climate datasets, along with software and information to assess indicators of climate impact

Dr. Mike Mineter, School of GeoSciences, and Robin Rice, EDINA and Data Library

Edinburgh DataShare – new features for users and depositors

I was asked recently on Twitter if our data library was still happily using DSpace for data – the topic of a 2009 presentation I gave at a DSpace User Group meeting. In responding (answer: yes!) I recalled that I’d intended to blog about some of the rich new features we’ve either adopted from the open source community or developed ourselves, to give our data users and depositors a better service and fulfil deliverables in the University’s Research Data Management Roadmap.

Edinburgh DataShare was built as an output of the DISC-UK DataShare project, which explored pathways for academics to share their research data over the Internet at the Universities of Edinburgh, Oxford and Southampton (2007-2009). The repository is based on DSpace, the most popular open source repository software in use globally. Managed by the Data Library team within Information Services, it is now a key component in the UoE’s Research Data Programme, endorsed by its academic-led steering group.

An open access, institutional data repository, Edinburgh DataShare currently holds 246 datasets across collections in 17 out of 22 communities (schools) of the University. It is listed in the Re3data Registry of Research Data Repositories and indexed by Thomson Reuters’ Data Citation Index.

Last autumn, the university joined DataCite, an international standards body that assigns persistent identifiers in the form of Digital Object Identifiers (DOIs) to datasets. DOIs are now assigned to every item in the repository, and are included in the citation that appears on each landing page. This helps to ensure that even if the DataShare system one day no longer exists, as long as the data have a home the DOI will direct the user to the new location. Just as importantly, it helps data creators gain credit for their published data through proper data citation in textual publications, including their own journal articles that explain the results of their data analyses.
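
As a small illustration of what a DOI buys you: the resolver at doi.org both redirects to the current landing page and, via standard content negotiation, can return a ready-formatted citation. The DOI below is a placeholder, not a real DataShare item:

    import requests

    doi = "10.1234/example-dataset"  # placeholder DOI, for illustration

    # Following redirects from the resolver leads to wherever the data
    # currently live, even if the repository has moved.
    landing = requests.get(f"https://doi.org/{doi}")
    print(landing.url)

    # Content negotiation returns a formatted citation instead of a page.
    citation = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "text/x-bibliography; style=apa"})
    print(citation.text)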

The autumn release also streamlined our batch ingest process, to assist depositors with large or numerous data files by bypassing the web upload front-end. Currently we are able to accept files up to 10 GB in size, but we are being challenged to allow ever greater file sizes.

Making the most of metadata

Discover panel – example from the Geosciences community

Every landing page (home, community, collection) now has a ‘Discover’ panel giving top hits for each metadata field (such as subject classification, keyword, funder, data type, spatial coverage). The panel acts as a filter when drilling down to different levels, allowing the most common values to be ‘discovered’ within each section.
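
Conceptually the panel is simple: count the values of each metadata field over whatever set of items is in view, and recount after each filter is applied. A toy sketch, with item records invented for illustration:

    from collections import Counter

    # Invented item records; real values come from each item's metadata.
    items = [
        {"subject": "Geosciences", "data_type": "raster"},
        {"subject": "Geosciences", "data_type": "tabular"},
        {"subject": "Linguistics", "data_type": "audio"},
    ]

    def discover_panel(items, field):
        """Top hits for one metadata field over the items in view."""
        return Counter(item.get(field) for item in items).most_common()

    print(discover_panel(items, "subject"))
    # Drilling down: filter, then recompute the panel on the subset.
    subset = [item for item in items if item["subject"] == "Geosciences"]
    print(discover_panel(subset, "data_type"))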

The usage statistics at each level are now publicly viewable as well, so depositors and others can see how often an item is viewed or downloaded. This is useful for many reasons: users can see what is most useful in the repository; depositors can see if their datasets are being used; stakeholders can compare the success of different communities. Being completely open and transparent about usage is a step towards ‘altmetrics’, or alternative ways of measuring scholarly or scientific impact. The repository is now also part of IRUS-UK (Institutional Repository Usage Statistics UK), which uses the COUNTER standard to make repository usage statistics nationally comparable.

What’s coming?

Stay tuned for future improvements around a new look and feel, preview and display by data type, streaming support, BitTorrent downloading, and Linked Open Data.

Robin Rice
EDINA and Data Library

Data Vault project kickoff meeting

Last week, members of the Data Vault project got together for the kickoff meeting. Hosted at the University of Manchester Library, we discussed the project plan and milestones for the three-month project, agreed terminology for parts of the system, and started to assign tasks to project members for the first month.

Being only three months long, the project is being run in three one-month chunks. These are defined as follows:

  1. Month 1: Define and Investigate: This phase will allow us to agree what the Data Vault should do, and how it does it. Specifically it will look at:
    1. What are the use cases for the Data Vault?
    2. How do we describe the system? (create overview diagrams)
    3. How should the data be packed (metadata + data) for long-term archival storage? (see the sketch after this list)
    4. Develop example workflows for how the Data Vault could be used in the research process
    5. Examine the capabilities of archival storage systems to ensure they can support the proposed Data Vault
  2. Month 2: Requirements and Design: This phase will create the requirements specification and initial design of the system:
    1. Define the requirements specification
    2. Use the requirement specification to design the Data Vault system
  3. Month 3: Develop a Proof of Concept: This phase will seek to develop a minimal proof of concept that demonstrates the Data Vault idea:
    1. Deliver a working proof of concept that can describe and archive some data, and then retrieve it
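
On the packing question (month 1, point 3 above), one widely used answer is a BagIt-style layout: payload files alongside a metadata record and a checksum manifest, so the archive can verify integrity years later. A minimal sketch of the idea – not the project’s eventual design:

    import hashlib
    import json
    from pathlib import Path

    def pack(deposit_dir, metadata):
        """Package a deposit as payload + metadata + checksum manifest
        (a BagIt-like layout, sketched for illustration only)."""
        root = Path(deposit_dir)
        (root / "metadata.json").write_text(json.dumps(metadata, indent=2))

        manifest = []
        for path in sorted((root / "data").rglob("*")):
            if path.is_file():
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                manifest.append(f"{digest}  {path.relative_to(root)}")
        (root / "manifest-sha256.txt").write_text("\n".join(manifest))

    # Example: a deposit directory with its payload under data/
    # pack("/tmp/deposit-0001", {"creator": "A. Researcher", "title": "..."})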

At the end of month three, we will prepare for the second Jisc Data Spring sandpit workshop where we will seek to extend the project to take the prototype and develop it into a full system.

All of this is being documented in the project plan, which is a ‘living document’ that is constantly evolving as the project progresses. The plan is online as a Google Document:

Look out for further blog posts during the month as we undertake the definitions and investigations!

Kickoff meeting

Originally posted on the Jisc Data Vault blog, April 7, 2015 by Stuart Lewis, Deputy Director, Library & University Collections.

New data analysis and visualisation service

Statistical Analysis without Statistical Software

The Data Library now has an SDA server (Survey Documentation and Analysis), and is ready to load numeric data files for access by either University of Edinburgh users only, or ‘the world’. The University of Edinburgh SDA server is available at: http://stats.datalib.edina.ac.uk/sdaweb/

SDA provides an interactive interface, allowing extensive data analysis with significance tests. It also offers the ability to download user-defined subsets with syntax files for further analysis on your platform of choice.

SDA can be used to teach statistics, in the classroom or via distance-learning, without having to teach syntax. It will support most statistical techniques taught in the first year or two of applied statistics. There is no need for expensive statistical packages or long learning curves. SDA has won the American Political Science Association’s award for Best Instructional Software.

For data producers concerned about disclosure control, SDA provides the capability of defining usage restrictions on a variable-by-variable basis: for example, minimum cell sizes (weighted or unweighted), requirements that particular variables be collapsed (recoded) before use, or restrictions on particular bi- or multivariate combinations.
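
To illustrate the minimum-cell-size rule: a tabulation is only released if every cell contains enough cases. A toy sketch, with the threshold and data invented:

    from collections import Counter

    MIN_CELL_SIZE = 5  # invented threshold, for illustration

    def safe_crosstab(pairs):
        """Return cell counts, or refuse to release the table if any
        cell falls below the minimum cell size."""
        cells = Counter(pairs)
        if any(n < MIN_CELL_SIZE for n in cells.values()):
            raise ValueError("table suppressed: cell below minimum size")
        return cells

    # Sex x region pairs from a hypothetical survey extract.
    pairs = [("F", "North")] * 8 + [("M", "North")] * 6 + [("F", "South")] * 2
    safe_crosstab(pairs)  # raises: the (F, South) cell has only 2 cases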

For data managers and those concerned about data preservation, SDA can be used to store data files in a generic, software-independent format (fixed-field ASCII), and includes the capability to produce the accompanying metadata in the emerging DDI-standard XML format.
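
The virtue of fixed-field ASCII plus separate metadata is that any future software can re-read the file from the column positions recorded in the metadata. A small illustration using pandas, with column positions invented:

    import io

    import pandas as pd

    # A fixed-field ASCII extract; the column positions come from the
    # accompanying metadata, not the file itself (positions invented).
    raw = io.StringIO("000117503\n"
                      "000224101\n")
    colspecs = [(0, 4), (4, 7), (7, 9)]
    names = ["caseid", "height", "age"]
    print(pd.read_fwf(raw, colspecs=colspecs, names=names))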

Data Library staff can mount data files very quickly if they are well documented with appropriate metadata formats (e.g. SAS or SPSS), depending on the access restrictions pertaining to the datafile. To request that a datafile be made available in SDA, contact datalib@ed.ac.uk.

Laine Ruus
EDINA and Data Library

Leading a Digital Curation ‘Lifestyle’: First day reflections on IDCC15

[First published on the DCC Blog, republished here with permission.]

Okay, that title is a joke, but an apt one for a brief reflection on this year’s International Digital Curation Conference in London this week, whose theme was looking ten years back and ten years forward since the UK Digital Curation Centre was founded.

The joke references an alleged written or spoken mistake someone made in referring to the Digital Curation lifecycle model, gleefully repeated on the conference tweetstream (#idcc15). The model itself, as with all great reference works, both builds on prior work and was a product of its time – helping to establish the DCC’s authority within and beyond the UK at a moment when people were casting about for common language and understanding in the new terrain of digital preservation, data curation, and ‘digital curation’ – a perplexing combination of terms which perhaps still hasn’t quite taken off (at least not to the same extent as ‘research data management’). I still have my mouse-mat of the model and live with regrets it was never made into a frisbee.

Digital Curation Lifecycle

They say about Woodstock that ‘if you remember it you weren’t really there’, so I don’t feel too bad that it took Tony Hey’s coherent opening plenary talk to remind me of where we started way back in 2004 when a small band under the directorship of Peter Burnhill (services) and Peter Buneman (research) set up the DCC with generous funding from Jisc and EPSRC. Former director Chris Rusbridge likes to talk about ‘standing on the shoulders of giants’ when describing long-term preservation, and Tony reminded us of the important, immediate predecessors of the UK e-Science Programme and the ground-breaking government investment in the Australian National Data Service (ANDS) that was already changing a lot of people’s lifestyles, behaviours and outlooks.

Traditionally the conference has a unique format that focuses on invited panels and talks on the first day, with peer-reviewed research and practice papers on the second, interspersed with demos and posters of cutting-edge projects, followed by workshops in the same week. So whilst I always welcome the erudite words of the first day’s contributors, at times there can be a sense of, ‘Wait – haven’t things moved on from there already?’ So it was with the protracted focus on academic libraries and the rallying cries of the need for them to rise to the ‘new’ challenges during the first panel session, chaired by Edinburgh’s Geoffrey Boulton and focused ostensibly on international comparisons. Librarians – making up only part of the diverse audience – were asking each other during the break and on Twitter: isn’t that exactly what they have been doing in recent years, since, for example, the NSF requirements in the States and the RCUK (especially EPSRC) rules in the UK for data management planning and data sharing? Certainly the education and skills of data curators as taught in iSchools (formerly Library Schools) have been a mainstay of IDCC topics in recent years, this one being no exception.

But has anything really changed significantly, either in libraries or, more importantly, across academia, since digital curation entered the namespace a decade ago? This was the focus of a panel led by the proudly impatient Carly Strasser, who has no time for ‘slow’ culture change and provocatively assumes ‘we’ must be doing something wrong. She may be right, but the panel was divided. Tim DiLauro observed that some disciplines are going fast and some are going slow, depending on whether technology is helping them get the business of research done. And even within disciplines there are vast differences – perhaps proving the adage that ‘the future is already here, it’s just not evenly distributed’.

panel session

Geoffrey Bilder spoke of tipping points by looking at how recently DOIs (Digital Object Identifiers, used in journal publishing) meant nothing to researchers and how they have since caught on like wildfire. He also pointed blame at the funding system which focuses on short-term projects and forces researchers to disguise their research bids as infrastructure bids – something they rightly don’t care that much about in itself. My own view is that we’re lacking a killer app, probably because it’s not easy to make sustainable and robust digital curation activity affordable and time-rewarding, never mind profitable. (Tim almost said this with his comparison of smartphone adoption). Only time will tell if one of the conference sponsors proves me wrong with its preservation product for institutions, Rosetta.

It took long-time friend of the DCC Clifford Lynch to remind us in the closing summary (day 1) of exactly where it was we wanted to get to, a world of useful, accessible and reproducible research that is efficiently solving humanity’s problems (not his words). Echoing Carly’s question, he admitted bafflement that big changes in scholarly communication always seem to be another five years away, deducing that perhaps the changes won’t be coming from the publishers after all. As ever, he shone a light on sticking points, such as the orthogonal push for human subject data protection, calling for ‘nuanced conversations at scale’ to resolve issues of data availability and access to such datasets.

Perhaps the UK, and Scotland in particular, are ahead in driving such conversations forward; researchers at the University of Edinburgh co-authored a report two years ago for the government on “Public Acceptability of Data Sharing Between the Public, Private and Third Sectors for Research Purposes”, as a precursor to innovations in providing researchers with secure access to individual National Health Service records linked to other forms of administrative data when informed consent is not possible to achieve.

Given the weight of this societal and moral barrier to data sharing, and the spread of topics over the last 10 years of conferences, I quite agree with Laurence Horton, one of the panelists, who said that the DCC should give a particular focus to the Social Sciences at next year’s conference.

Robin Rice
Data Librarian (and former Project Coordinator, DCC)
University of Edinburgh

Open up! On the scientific and public benefits of data sharing

Research published a year ago in the journal Current Biology found that 80 percent of original scientific data obtained through publicly-funded research is lost within two decades of publication. The study, based on 516 randomly selected journal articles which purported to make associated data available, found the odds of finding the original data for these papers fell by 17 percent every year after publication, and concluded that “Policies mandating data archiving at publication are clearly needed” (http://dx.doi.org/10.1016/j.cub.2013.11.014).
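
To unpack that statistic: a 17 percent annual fall in the odds compounds severely. A quick back-of-envelope check, assuming – purely for illustration – even odds of finding the data at publication:

    # Odds of finding the data fall 17% per year after publication.
    # Starting odds of 1.0 (a 50% chance) are an assumption, for scale.
    odds_at_publication = 1.0
    for years in (5, 10, 15, 20):
        odds = odds_at_publication * 0.83 ** years
        probability = odds / (1 + odds)
        print(years, round(probability, 2))  # 5: 0.28 ... 20: 0.02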

In this post I’ll touch on three different initiatives aimed at strengthening policies requiring publicly funded data – whether produced by government or academics – to be made open. First, a report published last month by the Research Data Alliance Europe, “The Data Harvest: How sharing research data can yield knowledge, jobs and growth”. Second, a report by an EU-funded research project called RECODE on “Policy Recommendations for Open Access to Research Data”, released last week at their conference in Athens. Third, the upcoming publication of Scotland’s Open Data Strategy, pre-released to attendees of an Open Data and PSI Directive awareness-raising workshop on Monday in Edinburgh.

Experienced so close together in time (I read the Data Harvest report on the plane back from Athens, in between the two meetings), these discrete recommendations, policies and reports have me just about believing that 2015 will lead not only to a new world of interactions in which much more research becomes a collaborative and integrative endeavour, playing out the idea of ‘Science 2.0’ or ‘Open Science’, but even to the long-promised ‘knowledge economy’ actually coalescing, based on new products and services derived from the wealth of (open) data being created and made available.

‘The initial investment is scientific, but the ultimate return is economic and social’

John Wood, currently the Co-Chair of the global Research Data Alliance (RDA) as well as Chair of RDA-Europe, set out the case – in his introduction to the Data Harvest report, and from the podium at the RECODE conference – that the new European commissioners and parliamentarians must, first of all, not get in the way, and second, almost literally ‘plan the harvest’ of the economic benefits that the significant public investments in data, research and technical infrastructure are bringing.

The report’s irrepressible argument goes, “Just as the World Wide Web, with all its associated technologies and communications standards, evolved from a scientific network to an economic powerhouse, so we believe the storing, sharing and re-use of scientific data on a massive scale will stimulate great new sources of wealth.” The analogy is certainly helped by the fact that the WWW was invented at a research institute (CERN), by a researcher, for researchers. The web – connecting 2 billion people – contributed more to GDP globally than energy or agriculture, according to a 2011 McKinsey report. The report doesn’t shy away from reminding us, and the politicians it targets, that it is the USA rather than Europe that has grabbed the lion’s share of economic benefit – via Internet giants Google, Amazon, eBay, etc. – from the invention of the Web, and that we would be foolish to let this happen again.

This may be a ruse to convince politicians to continue to pour investment into research and data infrastructure, but if so it is a compelling one. Still, the purpose of the RDA, with its 3,000 members from 96 countries, is to further global scientific data sharing, not economies. The report documents what it considers to be a step-change in the nature of scientific endeavour, in discipline after discipline. The report – the successor to the 2010 report, also chaired by Wood, “Riding the Wave: How Europe can gain from the rising tide of scientific data” – celebrates rather than fears the well-documented data deluge, stating,

“But when data volumes rise so high, something strange and marvellous happens: the nature of science changes.”

The report gives examples of successful European collaborative data projects, mainly but not exclusively in the sciences, such as the following:

  • Lifewatch – monitors Europe’s wetlands, providing a single point to collect information on migratory birds. Datasets created help to assess the impact of climate change and agricultural practices on biodiversity
  • Pharmacog – partnership of academic institutions and pharmaceutical companies to find promising compounds for Alzheimer’s research to avoid expensive late-stage failures of drugs in development.
  • Human Brain Project – multidisciplinary initiative to collect and store data in a standardised and systematic way to facilitate modelling.
  • Clarin – integrating archival information from across Europe to make it discoverable and usable through a single portal regardless of language.

The benefits of open data, the report claims, extend to three main groups:

  • to citizens, who will benefit indirectly from new products and services and also be empowered to participate in civic society and scientific endeavour (e.g. citizen science);
  • to entrepreneurs, who can innovate based on new information that no one organisation has the money or expertise to exploit alone;
  • to researchers, for whom the free exchange of data will open up new research and career opportunities, allow crossing of boundaries of disciplines, institutions, countries, and languages, and whose status in society will be enhanced.

‘Open by Default’

If the data harvest report lays out the argument for funding open data and open science, the RECODE policy recommendations focus on what the stakeholders can do to make it a reality. The project is fundamentally a research project which has been producing outputs such as disciplinary case studies in physics, health, bioengineering, environment and archaeology. The researchers have examined what they consider to be four grand challenges for data sharing.

  • Stakeholder values and ecosystems: the road towards open access is not perceived in the same way by those funding, creating, disseminating, curating and using data.
  • Legal and ethical concerns: unintended secondary uses, misappropriation and commercialization of research data, unequal distribution of scientific results and impacts on academic freedom.
  • Infrastructure and technology challenges: heterogeneity and interoperability; accessibility and discoverability; preservation and curation; quality and assessability; security.
  • Institutional challenges: financial support, evaluating and maintaining the quality, value and trustworthiness of research data, training and awareness-raising on opportunities and limitations of open data.

RECODE gives overarching recommendations as well as stakeholder-specific ones – a ‘practical guide for developing policies’ with checklists for the four major stakeholder groups: funders, data managers, research institutions and publishers.

‘Open Changes Everything’

The Scottish government event was a pre-release of the open data strategy, which is in its final draft following public consultation and awaiting final ministerial approval. The speakers made it clear that Scotland wants to be a leader in this area and to drive the culture change needed to achieve it. The policy is driven in part by the G8 countries’ “Open Data Charter”, which commits them to act by the end of 2015 on a set of five basic principles – for instance, that public data should be open to all “by default” rather than only in special cases – and is supported by UK initiatives such as the government-funded Open Data Institute and the grassroots Open Knowledge Foundation.

Improved governance (or public services) and ‘unleashing’ innovation in the economy are the two main themes of both the G8 charter and the Scotland strategy. The fact was not lost on the bureaucrats devising the strategy that public sector organisations have as much to gain as the public and businesses from better availability of government data.

The thorny issue of personal data is not overlooked in the strategy, and a number of important strides have been taken in Scotland by government and (University of Edinburgh) academics recently on both understanding the public’s attitudes, and devising governance strategies for important uses of personal data such as linking patient records with other government records for research.

According to Jane Morgan from the Digital Public Services Division of the Scottish Government, the goal is for citizens to feel ownership of their own data, while opening up “trustworthy uses of data for public benefit”.

Tabitha Stringer, whose title might properly be translated as ‘policy wonk’ for open data, reiterated the three main reasons for the government to embrace open data:

  • Transparency, accountability, supporting civic engagement
  • Designing and delivering public services (and increasingly digital services)
  • Basis for innovation, supporting the economy via growth of products & services

‘Digital first’

The remainder of the day focused on the new EU Public Sector Information directive and how it is being ‘transposed’ into UK legislation, to be completed this year. In short, the Freedom of Information and other legislation is being built upon to require of government agencies not just publication schemes but also asset lists with particular titles. The effect of this – and the reason for the awareness-raising workshop – is that every government agency is to become a data publisher, and must learn how to manage its data not just for its own use but for public ‘re-users’. Also, for the first time, academic libraries and other ‘cultural organisations’ are to be included in the rules, where there is a ‘public task’ in their mission.

‘Digital first’ refers to the charging rules in which only marginal costs (not full recovery) may be passed on, and where information is digital the marginal cost is expected to be zero, so that the vast majority of data will be made freely available.

Robin Rice
EDINA and Data Library

 

 

Share