The Impact Factor: Past its Expiry Date
Until very recently, the only ways to measure the quality of a scientific article were pre-publication peer review and post-publication citation rates.
Citation rates are still commonly used to assess the quality of individual scientists and of individual scientific journals. In the latter case the measuring tool, the impact factor (IF), is thought to represent the chance of high citation rates when publishing your work in high-impact journals. High citation rates for articles are thus taken to mean high quality for the underlying science. In reality, the impact factor has been shown to correlate poorly with actual citation rates (http://arxiv.org/abs/1205.4328, http://blogs.lse.ac.uk/impactofsocialsciences/2012/06/08/demise-impact-factor-relationship-citation-1970s/). In fact, it correlates rather well with recorded rejection rates for submitted papers (http://blogs.lse.ac.uk/impactofsocialsciences/2011/12/19/impact-factor-citations-retractions/).
This effectively undermines the assumed relationship between impact factor and the quality of science.
The use of the impact factor has also had another side effect: it has led to the preservation of a publishing system in which authors sustain existing high-impact Toll Access journals by publishing their work there, simply because these journals are labeled high impact. For these reasons and more (see below), I will argue that the impact factor is long past its (imaginary) expiry date and should urgently be replaced by a new system consisting of a combination of altmetrics and citation rates. To distinguish this system from the old one, I would like to suggest a completely new name: the Relevance Index.
Open access publishing has been instrumental in bringing about the imminent demise of high-impact Toll Access journals.
Today many high-quality open access journals are publishing a growing number of highly cited, high-quality scientific articles.
Although open access has been shown to increase citation rates, we should not make the mistake of wanting to continue using the impact factor. The reason is simple: open access opens the way to far better methods for assessing scientific quality.
For starters, many more people will be able to read scientific articles, and therefore post-publication peer review can replace the biased pre-publication peer-review system. In addition to actual citation rates, the relevance of articles in an open access system can be measured by monitoring social media usage, download statistics, the quality of accompanying data, external links to articles, and so on. In contrast to the journal-level impact factor, this system, called altmetrics, focuses on the article level. The field of altmetrics is under heavy development and has attracted much interest during the past few years. So much so that this year's altmetrics12 conference (#altmetrics12), taking place in Chicago this month, has drawn a record number of visitors. The conference can be followed via a live stream on Twitter (#altmetrics12, @altmetrics12).
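To make the article-level idea concrete, here is a minimal sketch in Python of how a Relevance Index along these lines might be computed. The signal names, the weights, and the log-scaling are purely illustrative assumptions on my part, not an existing altmetrics formula:

```python
import math

# Illustrative article-level signals and weights; these names and
# numbers are hypothetical assumptions, not a published altmetrics spec.
SIGNAL_WEIGHTS = {
    "citations": 0.40,        # post-publication citation count
    "downloads": 0.25,        # full-text download statistics
    "social_mentions": 0.20,  # tweets, blog posts, bookmarks, etc.
    "data_links": 0.15,       # links to accompanying datasets
}

def relevance_index(signals: dict) -> float:
    """Combine raw per-article usage signals into a single score.

    Each count is log-scaled so that one burst of social media
    attention cannot swamp years of steady citations, then the
    scaled signals are weighted and summed.
    """
    score = 0.0
    for name, weight in SIGNAL_WEIGHTS.items():
        count = signals.get(name, 0)
        score += weight * math.log1p(count)
    return score

# Example: two articles with very different usage profiles.
article_a = {"citations": 120, "downloads": 3000, "social_mentions": 45, "data_links": 8}
article_b = {"citations": 15, "downloads": 9000, "social_mentions": 400, "data_links": 0}
print(f"Article A: {relevance_index(article_a):.2f}")  # citation-heavy profile
print(f"Article B: {relevance_index(article_b):.2f}")  # attention-heavy profile
```

Whatever the exact formula, the important property is that every input is observable per article, so anyone with access to the open usage data could recompute and audit the score; a journal-level impact factor offers no such transparency.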
Apart from the fact that open access enables the development of quality-assessment tools that are better than the impact factor and raw citation counts, open access in itself leads to better-quality science through at least three separate mechanisms:
1) by counteracting the publication bias in the current publication system, 2) by discouraging selective publishing on the part of the author, and 3) by minimizing scientific fraud through the publication of underlying data. Let me explain.
1) Counteracting the publication bias in the current publication system. The current publication system has evolved in such a way that the more spectacular or unusual the results, the greater the chance that they will be accepted for publication in leading scientific journals. The same goes for publications confirming these findings; negative findings tend to be dismissed. In the case of efficacy studies for a new drug, two positive studies are sufficient for registration with the FDA, while cases have been reported where the number of submitted negative studies was as high as 18 (see: selective publication of anti-depressant trials and its influence on apparent efficacy). This publication bias is a real problem when validating scientific findings, because published results are very often unrepresentative of the true outcome of the many similar experiments that were not selected for publication. For example, an empirical evaluation of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990–2004, showed that a quarter of the randomized trials and five of six non-randomized studies had already been contradicted or found to have been exaggerated by 2005 (see: why current publication practices may distort science and references therein).

The strategy of publishers to preferentially publish the most exciting stories, and stories in support of a new finding, is linked to creating a status based on selectivity. This selectivity is then defended with the argument of limited print space, but it is in fact used for something else entirely. In economic terms, it is a way for publishers to turn a commodity (scientific information) whose future value is uncertain into a scarce product. This is the well-known commercial process of 'branding', whereby a product with no clear intrinsic value gains value through restricted access and artificial exclusivity. In the case of scientific publications this value then translates into status for the journal and for the scientist publishing in that journal. The most astonishing part of the story, however, is that publishers get their product (scientific information), which has been largely produced using public funding, for free, and succeed in selling it back to the public with the aid of commercial 'branding'. Seen in this light, publication bias is a by-product of commercial branding.

Open Access would put an end to these practices. It would give free access to information to the people who already paid for it. At the same time, implementation of open access publishing would counteract the publication bias imposed by publishers, and possibly also by stakeholders like pharmaceutical companies, because the grand total of papers published in such a system would be more representative of the actual work done in the field. For the field of malaria research, for example, the effect would be amplified through an increase in the number of relevant publications from researchers in the developing world. All this would lead to better science.
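To see how strongly selective publication can distort the apparent evidence, here is a small simulation sketch, again in Python. The number of labs, the noise level, and the acceptance threshold are hypothetical values chosen only for illustration, not figures from the studies cited above:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# Hypothetical scenario: 100 labs test a drug that has NO true effect.
# Each lab observes the true effect plus measurement noise; journals
# accept only clearly "positive" results (a crude stand-in for the
# selectivity described above).
TRUE_EFFECT = 0.0
N_LABS = 100
PUBLICATION_THRESHOLD = 0.5  # only "exciting" effect sizes get published

all_results = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_LABS)]
published = [r for r in all_results if r > PUBLICATION_THRESHOLD]

print(f"Mean effect across all labs:     {statistics.mean(all_results):+.2f}")
print(f"Mean effect in the 'literature': {statistics.mean(published):+.2f}")
print(f"Studies published: {len(published)} of {N_LABS}")
```

Although the true effect is zero, the published subset suggests a substantial positive one; this is, in miniature, the mechanism behind the contradicted and exaggerated findings described above.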
2) Discouraging selective publishing on the part of the author. The post-publication peer review made possible by open access (discussed in another post) would also contribute to better science, because it would provide a control mechanism against selective publishing by the author of a scientific publication.
3) Minimizing scientific fraud through the publication of underlying data. An important but often overlooked aspect of scientific publishing is the availability of the original data behind the actual science. For open access to really work, access should not be restricted to the mere content of published articles in scientific journals. Access to the raw data behind the articles is equally important, because validating a publication is difficult without access to the real data. A recent survey in PLoS ONE (Public availability of published research data in high-impact journals) found that although 44 of the 50 journals surveyed had a data-sharing policy in place, research data had been made available online for only 47 of 500 scientific publications that appeared in these journals in 2009. Implementation of an open access publication system that includes full access to raw research data would offer the further advantage of minimizing the possibilities for scientific fraud, which can range from biased presentation to the fabrication of data.
Open Access is the future of scientific publishing, and that future is near: the impact factor and Toll Access journals will soon become relics of the past.
In my view the impact factor has been flawed from the beginning, and the sooner we make the transition to open access and new forms of metrics, the better: better for science, better for citizens, better for business, better for countries, and better for society as a whole.
While I fully approve of the RI, and think it's a step in the right direction, it still appears to me too confined to its bibliometric roots.
One of the main arguments for Open Access was that society as a whole deserves to be able to access research to which it has contributed through taxes. When discussing the RI, it still appears largely confined to the scholarly communication sphere, and doesn't take into account social or societal impact, which is an altogether different and much larger realm. I think this is what we need to aim for when assessing the true value and impact of research: how do we measure its impact on all facets of society? Altmetrics and the Relevance Index, to me, seem like a great step in the right direction over the anachronistic IF, but there's still a long way to go to develop metrics that assess 'true' impact.
I think.
Jon, I fully agree with your remarks. A relevance index will not be the ultimate answer, but if it were to include a widely cast net of post-publication peer review, this could add to the assessment of true relevance, in the same way that ranking systems for software do. And yes, an eventual Relevance Index would be a big leap towards a useful metric, I think.