The next revolution in Science: Open Access will open new ways to measure scientific output
Open Access will not only change the way that science is done; it will also change the way that science is judged. The way scientific output is measured today centers on citations. On the author level, this essentially means the number of publications and the number of citations of an author's articles (author-level metrics). On the journal level, it means the average number of citations that articles published in that journal have received in a given time period (journal-level metrics).
For author-level metrics, the simple author citation count has now largely been replaced by the h-index, introduced in 2005 by J.E. Hirsch. An author has index h if h of his or her articles have each received at least h citations at a fixed date. For journal-level metrics, the Journal Citation Reports (JCR) is a database of all citations in more than 5,000 journals (about 15 million citations from 1 million source items per year). From this, the Journal Impact Factor (JIF) is derived: the number of citations in the current year to items published in the previous two years (numerator), divided by the number of substantive articles and reviews published in those same two years (denominator). It effectively represents the average number of citations per year that one can expect to receive by publishing in a specific journal.
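The two definitions above can be sketched in a few lines of code. This is an illustrative toy, not anyone's official implementation; the example citation counts are invented.

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank        # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y: citations in Y to items from Y-1 and Y-2,
    divided by the citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# An author with papers cited 10, 8, 5, 4 and 3 times has h-index 4:
print(h_index([10, 8, 5, 4, 3]))   # 4
# A journal with 600 citations to 200 citable items has JIF 3.0:
print(impact_factor(600, 200))     # 3.0
```

Note that the h-index rewards sustained citation performance across many papers, whereas the JIF is an average and can be dominated by a few highly cited items.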
Although the JIF is meant for large numbers of publications, it is also often used to evaluate individual scientists. Granting agencies and university committees, for instance, often substitute the number of articles that an author has published in high-impact journals for the author's actual citation counts. The introduction of the h-index has diminished the use of the JIF for individual scientists, but the practice has yet to disappear. The JIF has other flaws as well. Imagine a journal that publishes only reviews. Such a journal would evidently get a high impact factor, yet the real impact of its papers on the field would be much smaller than that of original research papers. An easy way around this problem is to apply the h-index methodology to journals, which is precisely what Google Scholar Metrics does. Because Google has only offered this since 1 April 2012, it is too early to tell whether it will become a widely accepted method for journal-level metrics.
The h-index, Google Scholar Metrics and the JIF are all reasonably good indicators of scientific quality. In measuring real-world impact, however, they are seriously flawed. Think for a moment of how impact is felt for whatever topic comes to mind. Every one of us will consider the publication itself, but probably also downloads, pageviews, blogs, comments, Twitter, and various kinds of media and social-network activity (Google+, Facebook), among other things. In other words, all the activity generated by "talking" about articles through social media and other online channels can be measured and used to give a more realistic impression of the real impact of a given research article. Since talking about articles depends on actually being able to read them, this is where Open Access comes into play. Article-level metrics of this kind only make sense when many people can discuss the actual content of published articles, which in turn is only possible when articles are open access. The optimal conditions for altmetrics would be a world in which all articles are published open access, but even at the current growth rate of open access publishing the method is already starting to make sense.
A number of article-level metrics services are currently in the start-up phase. Altmetric is a small London-based start-up focused on making article-level metrics easy. It does this by watching social media sites, newspapers and magazines for mentions of scholarly articles. The result is an "altmetric" score: a quantitative measure of the quality and quantity of attention that a scholarly article has received. The altmetric score is also implemented in UtopiaDocs, a PDF reader that links an article to a wealth of other online resources such as Crossref (DOI registration agency), Mendeley (scientist network), Dryad (data repository), Scibite (tools for drug discovery), Sherpa (OA policies and copyright database) and more. A disadvantage of UtopiaDocs may be that it focuses on the PDF format rather than an open format; the system also seems rather slow. PLoS likewise uses article-level metrics, placing comprehensive information about the usage and reach of published articles on the articles themselves, so that the entire academic community can assess their value. Unlike the services above, PLoS provides a complete score built on a combination of altmetrics, citation analysis, post-publication peer review, pageviews, downloads and other criteria. Finally, Total-Impact also makes extensive use of social-media analysis and other online statistics to measure the total impact of a given collection of scientific articles, datasets and other outputs. Its focus on collections represents yet another approach to the problem of evaluating scientific output.
The previous overview is probably far from complete, so please feel free to add other possibilities in your comments to this post. I do think, however, that the description above accurately reflects how fast the field of bibliometrics is moving, and that Open Access will provide the key to developing and implementing better ways to evaluate scientific output. Compared with current practices, all of which are based on citations alone, combining altmetrics, online usage statistics and post-publication peer review in an open access world will represent a true revolution in the way science is perceived by all, scientists included.