The Thomson Reuters Forum of industry experts, of which I am a member, recently released a report addressing issues affecting scholarly research and the attribution of research data.
The volume of scientific and scholarly research data available is projected to grow by a factor of 44 over the decade from 2010 to 2020, going from 0.8 zettabytes (ZB) to more than 35 ZB (1 ZB = 1 trillion gigabytes). In response to the challenges posed by this vast amount of information, the IP & Science business of Thomson Reuters convened a Forum of industry experts to discuss issues and potential solutions for the scholarly challenges ahead. The Forum published its first output in a report titled “Unlocking the Value of Research Data,” in which its experts examine the complexity of the issue and offer recommendations for the future.
Challenges outlined in the paper include:
- Providing uniform access to a broad variety of research outputs, including limitations in making the data available, searchable and retrievable
- Ensuring data quality and filtering content that has not yet undergone conventional peer review
- Ways to incentivize researchers to ensure their works are accounted for and attributable
- Open access and determining what is copyrighted versus what belongs to the public domain
- The transformation of publishers from a pay-to-read to a pay-to-publish model
- New forms of research assessment
The report highlights a number of organizations currently working to address these challenges, including figshare, the Research Data Alliance (RDA), the International Council of Science (ICSU) Data Publication Working Group, and Thomson Reuters with its Data Citation Index. The changing scholarly landscape will affect publishers, funders, authors, researchers and other stakeholders.