Sharing the results of publicly funded research

The next revolution in Science: Open Access will open new ways to measure scientific output

April 19, 2012 in Uncategorized

Open Access will not only change the way that science is done, it will also change the way that science is judged. The way that scientific output is measured today centers around citations. On the author level this essentially means the number of publications an author has and the number of citations those articles have received (author-level metrics). On the journal level, it means the average number of citations that articles published in that journal have received in a given time period (journal-level metrics).

For author-level metrics the Author Citation Index has now been replaced by the H-Index, introduced in 2005 by J.E. Hirsch. Here the criterion is the largest number n of articles that have each received ≥ n citations at a fixed date. In the case of journal-level metrics, the Journal Citation Reports (JCR) is a database of all citations in more than 5,000 journals—about 15 million citations from 1 million source items per year. From this the Journal Impact Factor (JIF) is derived: the number of citations in the current year to items published in the previous two years (numerator) divided by the number of substantive articles and reviews published in those same two years (denominator). It effectively represents the average number of citations per year that one can expect to receive by publishing one's work in a specific journal.
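
To make these two definitions concrete, here is a minimal sketch in Python. The function names and the sample numbers are purely illustrative and not taken from any real dataset.

```python
def h_index(citations):
    """H-Index: the largest n such that n articles have received >= n citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h


def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF: citations received this year to items published in the previous two years,
    divided by the number of citable items published in those same two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years


# Illustrative numbers only
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4 (four articles with at least 4 citations each)
print(impact_factor(900, 250))          # -> 3.6
```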

Although the JIF is meant for large numbers of publications, it is also often used in the evaluation of individual scientists. Granting agencies and university committees, for instance, often substitute the number of articles that an author has published in high-impact journals for the author's actual citation counts. The introduction of the H-Index has diminished the use of the JIF for individual scientists, but the practice has yet to disappear. Apart from this, the JIF has other flaws. Imagine a journal that only publishes reviews. Such a journal would evidently get a high impact factor, but the real impact of the published papers on the field will clearly be much less than that of original research papers. An easy way around this problem is to apply the H-Index methodology to journals, which is precisely what Google Scholar Metrics does. Because Google has only been offering this service since 1 April 2012, it is too early to tell whether it will become a widely accepted method for journal-level metrics.
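
Applied to a journal, the H-Index computation simply runs over the citation counts of everything the journal published in a fixed window (Google Scholar Metrics uses a five-year window for its h5-index). A minimal sketch, again with invented numbers:

```python
def journal_h_index(citations_per_article):
    """Largest n such that n articles published in the window have >= n citations each."""
    ranked = sorted(citations_per_article, reverse=True)
    return max((rank for rank, cites in enumerate(ranked, start=1) if cites >= rank), default=0)


# Citation counts for the articles a journal published in the last five years (invented)
window_citations = [120, 60, 33, 20, 18, 12, 9, 7, 4, 2, 1, 0]
print(journal_h_index(window_citations))  # -> 7
```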

The H-Index, Google Scholar Metrics and the JIF are all rather good indicators of scientific quality. However, in measuring real-world impact they are seriously flawed. Think for a moment of how impact is felt for whatever random topic you can think of. Every one of us will consider the publication itself, but probably also downloads, pageviews, blogs, comments, Twitter, and different kinds of media and social-network activity (Google+, Facebook), among other things. In other words, all the activity generated by "talking" about an article through social media and other online channels can be used to give a more realistic impression of the real impact of a given research article. Since talking about articles depends on actually being able to read them, this is where open access comes into play. The proposed kind of article-level metrics only makes sense when many people are able to discuss the actual content of published articles, which in turn is only possible when articles are open access. The optimal conditions for altmetrics would be reached when all articles were published as open access, but even with the current growth of open-access publishing the method is already starting to make sense.
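
As an illustration of what such an article-level metric could look like, the toy sketch below combines mention counts from several channels into a single weighted score. The channels, weights and numbers are entirely hypothetical; real services normalise and weight their data in far more sophisticated ways.

```python
# Toy article-level metric: a weighted sum of mentions across channels.
# Channel names, weights and counts are hypothetical, for illustration only.
WEIGHTS = {
    "downloads": 0.05,
    "pageviews": 0.01,
    "tweets": 1.0,
    "blog_posts": 5.0,
    "news_stories": 8.0,
    "facebook_shares": 0.25,
}


def article_score(mentions):
    """Combine per-channel mention counts into one number using the weights above."""
    return sum(WEIGHTS.get(channel, 0.0) * count for channel, count in mentions.items())


example = {"downloads": 1200, "pageviews": 5400, "tweets": 37,
           "blog_posts": 3, "news_stories": 1, "facebook_shares": 20}
print(article_score(example))  # -> 179.0
```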

A number of article-level metrics services are currently in the start-up phase. Altmetric is a small London-based start-up focused on making article-level metrics easy. It does this by watching social media sites, newspapers and magazines for any mentions of scholarly articles. The result is an "altmetric" score, a quantitative measure of the quality and quantity of attention that a scholarly article has received. The altmetric score is also implemented in UtopiaDocs, a PDF reader that links an article to a wealth of other online resources such as Crossref (DOI registration agency), Mendeley (scientist network), Dryad (data repository), Scibite (tools for drug discovery), Sherpa (OA policies and copyright database) and more. A disadvantage of UtopiaDocs may be that it focuses on the PDF format instead of an open format; the system also seems to be rather slow. PLoS likewise uses article-level metrics, presenting comprehensive information about the usage and reach of published articles on the articles themselves, so that the entire academic community can assess their value. Different from the above, PLoS provides a composite picture built on a combination of altmetrics, citation analysis, post-publication peer review, pageviews, downloads and other criteria. Finally, Total-Impact also makes extensive use of social-media analysis and other online statistics, providing a tool to measure the total impact of a given collection of scientific articles, datasets and other research outputs. Its focus on collections represents yet another approach to the problem of evaluating scientific output.
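
For readers who want to experiment, Altmetric offered a public REST endpoint for looking up attention data by DOI at the time of writing. The sketch below assumes that endpoint and a couple of response field names; treat both as assumptions and check the current API documentation before relying on them.

```python
# Sketch: fetch an article's attention data from Altmetric's public DOI endpoint.
# The URL and the response field names are assumptions based on the public API
# as it existed at the time of writing; they may have changed since.
import requests


def altmetric_summary(doi):
    resp = requests.get("https://api.altmetric.com/v1/doi/" + doi, timeout=10)
    if resp.status_code == 404:
        return None  # no attention data recorded for this DOI
    resp.raise_for_status()
    data = resp.json()
    return {
        "score": data.get("score"),
        "tweets": data.get("cited_by_tweeters_count"),
        "blogs": data.get("cited_by_feeds_count"),
    }


print(altmetric_summary("10.1371/journal.pone.0000000"))  # hypothetical DOI, for illustration
```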

The previous overview is probably far from complete, so please feel free to add other possibilities in your comments to this post. I do think, however, that the description above accurately reflects the fact that the field of bibliometrics is moving fast and that Open Access will provide the key to the development and implementation of better ways to evaluate scientific output. Compared with current practices, all of which are based on citations alone, the inclusion of altmetrics, online usage statistics and post-publication peer review in an open-access world will represent a true revolution in the way that science is perceived by all, scientists included.

9 responses to “The next revolution in Science: Open Access will open new ways to measure scientific output”

  1. Jan Velterop says:

    Thank you for mentioning Utopia Documents. Its speed depends on your own bandwidth and the speed with which the services Utopia Documents links to respond. You mention that it may be a disadvantage of UtopiaDocs that it focuses on the PDF format. But that is precisely the point: to focus on PDF, which is where the deficiency in web connectivity lies, while a lot of scientific literature is still 'ingested' via PDFs. Links are widely available with HTML versions (though not as widely as one might wish), but bridging the connectivity gap between HTML and PDF is one of Utopia Documents' main goals. In addition to the Altmetric score, Utopia Documents plans to include other metrics as well in a forthcoming release.

  2. Robin P Clarke says:

    This comment is not to praise the above article but to point out some of its serious weaknesses. And yet I guess some quasi-brilliant quality-assessment metric will count this comment as proof of the article’s superiority rather than inferiority.

    The author seems to be entirely lacking the concept that popularity/fashionableness/newsworthiness has only a marginal or even no relationship with real quality, and indeed can well be inversely related. For instance, Wegener's continental drift was ignored by professionals for 50 years; Mendel's genetics experiments were ignored for decades; Dr Down's descriptions of what are now called late and early autism are not even registered by the majority of autism researchers even now; the list goes on. The greatest discoveries have been notoriously COMPLETELY IGNORED (Impact Factor zero, H factor zero) by the "peer" community rather than salivated over with whopping citation impacts.

    The same nonsense applies to so-called peer review, which has been repeatedly shown to have nothing to do with quality and everything to do with conventional-wisdom fashionableness, and hence again works strongly against the real breakthrough discoveries.

    There is no substitute for actually reading and evaluating the thing itself with reference to the actual evidence. The real scientific-literature problem is the publish-or-perish system, which is drowning the genuine science under huge oceans of drivel. For this reason a high proportion of professional science contributes negatively rather than positively, not that any sort of impact factor could ever tell you so.

  3. Tom Olijhoek says:

    The point of my article was to show that once you and everybody else are at least able to take note of a complete article, this opens up the possibility of better judgment. I have never argued anywhere that this judgment would be based on fashionableness or newsworthiness. You yourself say "There is no substitute for actually reading", and that is exactly the point I am also making.
    I agree that the mere citing of scientific articles by peers, on the other hand, is no guarantee of scientific excellence. That is what the impact factor measures, and I would argue that if very many people download an article, tweet about it, comment on it and blog about it, then the article is bound to have significance. Mind you, it is no proof of significance and certainly not proof of a breakthrough. But not many articles are breakthroughs. For me, when many colleague scientists and also a great number of other interested persons talk and write about a particular paper, I would like to take part in that discussion on the basis of having read the open access article. Altmetrics weighs the opinion of other people as a function of their numbers and their activities, not only as a function of the quality of (some) individuals. To me this adds value to the flawed system based on citation assessment.

    • Robin P Clarke says:

      Indeed those changes would improve the situation, but it looks like you are still confusing the level of attention a document gets with its level of merit. The problem is that (as Einstein supposedly said!) what you can count doesn't necessarily count, and what really counts you can't necessarily count. Judgements of what is real quality and importance can only with difficulty be formulated into operational criteria, if at all. I find the academic world, and the world outside academia too, to be grossly overawed by superficiality, and not least infested with the Matthew effect – whereby a statement receives great publicity and credit if it comes from an already prestigious celeb, yet the identical statement gets not even a mention if some non-entity states it years earlier than the celeb. The whole system of trying to assign bulk authority labels of merit ultimately backfires, as per the Lysenkoist catastrophe in which the genuine experts had the status of slaves in Siberian mines while charlatans held all the professorships. There's plenty of evidence that the ghost of Lysenko is still alive and prospering in the Western "free" world.

      I would like to see a new system in which the crude measures underlying publish-or-perish are binned (a random system couldn't be worse), people just publish everything they think worthwhile directly to the web, and things then gain a reputation directly from how people react to them. At the moment the key problems appear to be (1) peer review, which in medicine at least is well known simply to bias in favour of establishment drivel or mutual back-scratching cliques, and (2) PubMed inclusion (and Google status), which again biases against anything new and deters people from just putting work on the web (because of the need to be prominently indexed). Researchers should also be treated negatively for their drivel publications, so that there is a much lower volume of output and a concentration on publishing only genuinely worthwhile material. Of course all this may be different outside the medical field.

  4. Gunther Eysenbach says:

    A few important omissions to note:

    1) JMIR has been collecting and measuring social media resonance on Twitter for over 3 years now – we call this the Twimpact Factor (see “Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact” http://www.jmir.org/2011/4/e123/). The TWIF and related metrics (TWINDEX) are displayed here: http://www.jmir.org/stats/mostTweeted

    2) altmetrics is really just a fancy term for infometrics, webometrics, social media metrics, infodemiology metrics, etc. If you google these terms, you will note that these ideas are not as new as you might think.

    3) WebCite (http://www.webcitation.org) was also originally set up not only to archive cited work on webpages, but also to collect metrics for impact based on citations/mentionings of webpages (open access articles?). It also creates a “publishing a-la-carte” system (Priem), where researchers can first publish something on the Internet, archive it (or have it archived by throw parties who want to cite it – i.e. create a WebCite snapshot), assign a DOI, have it peer-reviewed, and then view alternative metrics. Perhaps now is the time to fund disruptive systems like this which turn the traditional publishing system on its head?

    • Gunther Eysenbach says:

      Typo up there: “third parties” instead of “throw parties” (damn spellchecker)

    • Robin P Clarke says:

      Gunther, the WebCite website you mention (haha, it sounds the same as "website" in English!) looks a great thing, but I don't follow your sequence of "first publish…, archive it, assign a DOI…". So far so good, but then how does it get peer-reviewed if it isn't sent to a journal?

  5. […] This "altmetrics" seems to be attracting sudden attention, not only in the open access field but across the scientific community as a whole. (See this, this, this, this and this.) […]

