Sharing the results of publicly funded research


The cost of scientific publishing: update and call for action

- May 16, 2014 in Call to Action

Opening knowledge is great. Sharing knowledge is vital. In the past, publishers were the sole mediators for the dissemination of knowledge in printed form. In our digital age, sharing has become easier thanks to the internet. Yet, although all areas of society have embraced the internet as THE sharing medium, scientific publishing has lagged far behind. Nowhere are the advantages of sharing knowledge more obvious than in science; but instead of facilitating sharing, scientific publishers desperately try to protect their grip on access to knowledge. Open access publication systems are a real threat to their lucrative business. That is why finding arguments against open access has become vitally important to them. Countering those arguments, in turn, is of vital importance for all of us who need open access.

Where are we now…

Tim Gowers' recently published blog post on his quest for information about subscription prices for Elsevier journals, using direct approaches (calling and writing to publishers and libraries) and indirect approaches (requesting information under Freedom of Information legislation), has caused a major stir. I recommend reading Michelle Brook's very good overview in her recent blog. Some really astonishing facts have already come to light. It has, for instance, become apparent that different institutions pay very different prices for almost the 'same' deals. Also, universities that did not want to disclose the information often responded with the argument that commercial interests had to be protected (full details for all cases are in Tim Gowers' blog post).

The 'big deals' that Elsevier and other scientific publishers have made with libraries form their insurance for long-term profits. Making these deals subject to confidentiality actively prevents the price decreases that would otherwise occur through market mechanisms. And better still from the publisher's viewpoint, so-called hybrid journals, which allow open access publishing at a price, only add to the profits. Although publishers say that subscription prices will be lowered as the proportion of paid-for open access articles increases, there is as yet no sign of this. APCs for open access articles in hybrid journals are on average $900 higher than those of fully open access journals (as Dave Robberts reported on Google+).

A discussion thread on the above topics was started on the open access list of the OKFN. The following is an attempt to summarize that discussion, and at the same time a call for everyone to participate in this project by adding data on subscription prices in their own country to the growing dataset.

Charging for educational re-use…

One of the new items that came to light was the fact that Elsevier and other publishers receive extra fees to allow articles to be re-used for education. One would think that this situation would be covered by fair use policies, but this is not the case. Esther Hoorn of Groningen University (NL) provided the following link for an overview of worldwide rules on fair use and limitations for educational use: Study for Copyright Limitations and Exceptions for Educational Activities in North America, Europe, Caucasus, Central Asia and Israel (by Raquel Xalabarder, Professor of Law, Universitat Oberta de Catalunya (UOC), Barcelona, Spain). In the Netherlands, for instance, a fair remuneration has to be paid for the use of publisher-copyrighted material in education. Not very surprisingly, this is arranged in the form of a nationwide big deal with the publishers that has to be renegotiated every few years. According to Jeroen Bosman of Utrecht University (NL), educational use has been granted by some publishers or is sometimes included in big deal contracts for subscriptions. The situation seems to be similar in other countries. According to Stuart Lawson, in the UK it is the Copyright Licensing Agency (CLA) that higher education institutions pay for a licence to copy and re-use work for educational purposes. In 2013 the CLA had an income of £7,872,449 (https://www.cla.co.uk/data/pdfs/accounts/cla_statutory_accounts_year_to_2013.pdf, p.1). Not all of that came from educational institutions, but a lot of it did. Another few million pounds that is only spent because the original work wasn't openly licensed…

According to Fabiana Kubke of the University of Auckland, New Zealand does not have fair use either; it has fair dealing. A substantial amount goes into licensing for education at the University of Auckland, and those licences do not come out of the library budget (which is why they don't show up in the library's expenditures). This makes it all the more difficult to find out about the costs of this type of licensing. One would need to find out, for each institution, which office is in charge of negotiating and paying copyright clearing house costs.

From the examples that were discussed it seems that 'fair use' for education almost always has to be paid for. Only in the US and Canada does fair use usually allow for reproduction of 10% of copyrighted material without payment. This situation of having to pay for re-use of publisher-copyrighted material will probably not change very soon. That is at least the conclusion that can be drawn from the recent decision by the EU to block further discussion of the case made by WIPO (the World Intellectual Property Organisation) for harmonising international legislation on text and data mining and other copyright issues.

Fully implemented open access would be an order of magnitude cheaper…

Three items were heavily discussed on the open access mailing list: the cost of toll access publishing (what customers have to pay), the cost of open access publishing, and open access to data, which is necessary for text and data mining. Concerning the cost of publishing, it would be very useful to have data on the total expenditure for publishing per country. Data on the total cost are available. According to Bjoern Brembs: "Data from the consulting firm Outsell in Burlingame, California, suggest that the science-publishing industry generated $9.4 billion in revenue in 2011 and published around 1.8 million English-language articles — an average revenue per article of roughly $5,000." For the cost of open access publishing, Outsell estimates that the average per-article charge for open-access publishers in 2011 was $660.

Doing some quick calculations on Elsevier's income from access to its digital content, Ross Mounce initially concluded that the cost per article averaged about $2,800 over the subscription lifetime of an article (70 years). This assumed a mere $0.5 billion per year in income for Elsevier from digital access to its content. A more realistic figure is probably $1 billion in income, so the cost per article for a representative toll access business would be around $5,600, in line with what Outsell estimated. The cost of publishing with one representative open access publisher (PLoS) is at the moment around $1,350 per article.

This means that if we could switch to full open access for all articles immediately, we would save about 76% on publishing costs!
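
To make the arithmetic behind these figures explicit, here is a minimal back-of-the-envelope sketch in Python. It uses only the rough estimates quoted above (the Outsell industry figures, Ross Mounce's revised Elsevier estimate and the current PLoS fee), so the outputs are only as good as those inputs.

```python
# Back-of-the-envelope check of the figures quoted above.
# All inputs are rough estimates (Outsell, Ross Mounce, PLoS), not accounting data.

toll_revenue_2011 = 9.4e9   # USD, science-publishing industry revenue (Outsell)
articles_2011 = 1.8e6       # English-language articles published in 2011

revenue_per_article = toll_revenue_2011 / articles_2011
print(f"Average toll access revenue per article: ${revenue_per_article:,.0f}")
# -> roughly $5,200, i.e. the "~$5,000 per article" quoted above

toll_cost_per_article = 5600   # USD, revised estimate for a representative toll access publisher
oa_cost_per_article = 1350     # USD, current PLoS article charge

saving = 1 - oa_cost_per_article / toll_cost_per_article
print(f"Potential saving from a full switch to open access: {saving:.0%}")
# -> about 76%
```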

Profit of scholarly publishing vs. other industries (pic.twitter.com/L3U6GWhM, via @ceptional)

Regarding the profit margins of toll access publishers, Stuart Lawson provided a link to a spreadsheet in which he compared revenues, profits and profit margins of academic publishers for the last two years (link). There is nothing wrong with making a profit as such. The scientific publishers' profit margins are an issue because of their scale, which would be impossible in a truly open market, where open access publishing would have a competitive advantage. However, the toll access publishers' business of secret deals and long-term contracts works very effectively against a transition to open access business models. The more data we can aggregate on publishing costs, the more chance there is that open access can profit from its competitive advantage. Data for several individual countries, notably Brazil, Germany, the UK, the Netherlands, the USA and France, are already coming in.

What about open data…

Open data is another issue. As Peter Murray-Rust wrote in the discussion thread of the open access mailing list: "the real danger [of] publishers moving into data where they will do far worse damage in controlling us" (http://blogs.ch.cam.ac.uk/pmr/2014/04/29/is-elsevier-going-to-take-control-of-us-and-our-data-the-vice-chancellor-of-cambridge-thinks-so-and-im-terrified/) and "Yes – keep fighting on prices. But prices are not the primary issue – it's control." But scientific publishers make huge profits, and money can buy a lot of influence. There is a growing awareness that data are the 'gold' of the 21st century. It is of vital importance for science and society that scientific data are not controlled by a few big publishers who want to make as much profit as possible. Open data would be the best way to prevent this from happening. The signs in the US are positive in this respect, with Obama's directive for open government data. The outlook in Europe is less good in view of the previously mentioned decision by the EU to block further discussion on harmonising international legislation on text and data mining.

What we can do….

Together we can uncover the real cost of scientific publishing. Some of you may know where to find figures for your country, or you may be able to ask for information using Freedom of Information legislation. You can add these data directly into a spreadsheet on Google Docs or report them on the wiki. The aggregated data will allow us to lobby more effectively for the promotion of open knowledge. We will keep you apprised of developments through the open access mailing list and the blogs on this website. You can subscribe to the open access mailing list and/or other OKFN lists at LISTINFO.
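
Purely as an illustration of the kind of record that makes such a crowd-sourced dataset useful, here is a hypothetical sketch in Python; the column names, the institution and the price below are invented placeholders, not the actual headers or contents of the shared spreadsheet or wiki.

```python
import csv

# Hypothetical record format for crowd-sourced subscription-price data.
# Field names and the example row are invented placeholders, not real data
# and not the actual columns of the shared spreadsheet.
fields = ["country", "institution", "publisher", "year",
          "deal_type", "price_eur", "source"]

rows = [
    {"country": "NL", "institution": "Example University", "publisher": "Elsevier",
     "year": 2013, "deal_type": "big deal (journal bundle)",
     "price_eur": 1_250_000, "source": "FOI request"},
]

with open("subscription_prices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
```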

Recap of the Berlin 11 conference: the call for a change in scientific culture becomes stronger

- January 22, 2014 in Events

The Berlin11 meeting, which took place in November 2013, was a very energetic, motivating and inspiring event, and it is to be hoped that the newly introduced satellite meeting for young scientists, in particular, will take place again next year. The presentations from this conference will soon be online (berlin11.org).

I would like to use this blog to highlight some of the presentations and events of Berlin11 in relation to the developments of 2013. First of all, the Berlin11 conference was the first ever to host a satellite conference aimed at young scientists. This reflects the increasing support for open access among the younger generation of scientists. One of the highlights here was the presentation by Jack Andraka, a 16-year-old student who told the story of the role open access played in his development of an innovative test for pancreatic cancer.

Another highlight of the satellite conference was the launch of the Open Access Button, which collects instances where people cannot access information because of toll barriers AND even suggests ways to find the information elsewhere, e.g. in open access repositories (link here).

At the main conference Mike Taylor, a scientist and open access advocate from the UK, gave a compelling talk on why the open access movement needs idealists and why the (also economic) benefits of open access far outweigh the costs (slideshare via this link). Bernard Rentier, Rector of the University of Liège, told of his solution for getting scientists to submit their articles to the university repository: only articles deposited there count towards tenure and promotion. Robert Darnton, professor at Harvard University, gave a fascinating presentation on the building of the Digital Public Library of America, which gives access to over 5.5 million items from libraries, archives and museums.

Slide of Robert Darnton’s presentation at Berlin11.

 

Marin Dacos from OpenEdition, France, introduced OpenEdition, the major European portal for digital publications, including books, in the humanities and social sciences. Glyn Moody, a science journalist from the UK, gave a captivating talk called 'Half a Revolution' on the history of the internet, open source software and open access publications. He made a very strong plea for open access without embargo, calling it Zero Embargo Now: ZEN. The slides of his talk are available here. David Willetts, the UK Minister for Universities and Science, explained the UK government's policy on open access and, as a side note, announced the imminent launch of a science wiki platform.

Ulrich Pöschl of the University of Mainz, Germany, elaborated on the need for other systems of peer review and for public discussion of published articles. The standard peer review system has become so flawed that we urgently need to find ways to replace this measure of scientific quality with new methods of quality assessment. He gave a talk at Berlin11 very similar to the one he gave at the ALPSP seminar on the future of peer review in London one week earlier. He puts his ideas on interactive open access publishing into practice in the journal Atmospheric Chemistry and Physics. According to Pöschl, a new multi-stage peer review system in combination with open access will lead to improved scientific quality assurance. The process of interactive open access publishing has two stages: 1) rapid publication of a discussion paper, public peer review and interactive discussion; 2) review completion and publication of the final paper. The use of a discussion paper guarantees rapid publication, open access enables public peer review and discussion with a large number of participants, and the number of individual reviews possible in this system is a built-in quality assurance for the final paper. Interestingly, quality assessment in this scheme can also incorporate altmetrics and other measures of impact assessment.

Slide of Ulrich Pöschl's presentation at Berlin11.
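
As a rough illustration of how this two-stage workflow could be modelled, here is a minimal sketch in Python; the stage names follow Pöschl's description above, while the class and method names are purely illustrative and not part of any actual submission system.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Minimal sketch of the two-stage interactive open access publishing process
# described above; names are illustrative only, not a real editorial system.

class Stage(Enum):
    DISCUSSION_PAPER = auto()  # stage 1: rapid publication, public peer review, discussion
    FINAL_PAPER = auto()       # stage 2: review completion, publication of the final paper

@dataclass
class Submission:
    title: str
    stage: Stage = Stage.DISCUSSION_PAPER
    public_comments: list = field(default_factory=list)

    def add_comment(self, comment: str) -> None:
        """Interactive public discussion during stage 1."""
        self.public_comments.append(comment)

    def complete_review(self) -> None:
        """Move to stage 2 once review and discussion are completed."""
        self.stage = Stage.FINAL_PAPER

paper = Submission("Example discussion paper")
paper.add_comment("Referee 1: please clarify the method section.")
paper.complete_review()
print(paper.stage)  # Stage.FINAL_PAPER
```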

Two main topics during the conference were the call for immediate unrestricted access, Zero Embargo Now (Glyn Moody), and the call for a new system of scientific quality assessment (many speakers). The current quality assessment system focuses on where you publish (rewarding publication in high impact journals) and on the number of publications (the 'publish or perish' culture). A general feeling of what such a new assessment should look like is summarized in a key message from the presentation that Cameron Neylon gave: "it should not be that important where and how much you publish but rather what and what quality you publish".

I would add to this that in a sense it DOES matter where you publish, namely insofar as you publish your work open access, allowing unrestricted re-use. One of the main problems here is how to create the right incentives for scientists to publish open access. We have seen one possible solution in the idea put forward by Bernard Rentier (see above). In addition, a completely refurbished assessment system could provide the right incentives if it takes into account the access, re-use, (social) impact and quality of publications. The current scientific culture, with its emphasis on quantity and status, is slowly but surely undermining the quality of science, because quantity is fundamentally different from quality. In their need to publish as much as possible as quickly as possible, scientists often duplicate results and/or publish results prematurely. Extreme cases of fabricated results are also seen, and it is especially when these cases come to light that the reputation of science suffers. The conventional peer review system used for quality assessment has proven insufficiently robust to prevent this, and new forms of peer review and other methods are being developed to replace it (see for instance F1000).

The prioritization of publishing and research output has had a very negative impact on education and on the quality of research. It was for this reason that Science in Transition was founded in October 2013 in Amsterdam. The initiators felt that "science does not work as it should" and that something ought to be done about it. The central message, again, is that the pressure to publish is detrimental to education, and that the quality of science and its reputation are compromised by the current system of judging scientific output by the number of publications. The full text of the position paper can be read here.

In a TV interview on December 30, 2013, Robbert Dijkgraaf, director of the Institute for Advanced Study in Princeton, put it this way:

“we have to judge science more by its value than by the number [of scientific output]”.

That there is a growing awareness of the need for change in the area of scientific quality assessment is also illustrated by these quotations from the New Year's speech of the rector of the University of Amsterdam, prof. van den Boom: "More publications is not always better" and "we have to think hard about alternatives for the evaluation of research" (the full speech can be seen here). In the two months that have passed since Berlin11, open access and the quality of science have received considerable public media attention. The German magazine "Die Zeit", for example, published a set of articles in its first issue of 2014 on 'how to save science'. A new case of (self-)plagiarism has led to heated discussions in Dutch newspapers on the uncertain quality of science and the possibilities of fraud. The president of the association of Dutch universities, Karl Dittrich, has announced that starting in January 2015 more emphasis will be put on the quality, and less on the quantity, of scientific publications in the evaluation of scientists and universities.

The Dutch state secretary for research and education, Sander Dekker, had previously stated that scientific results should be published in open access journals as of 2016, if necessary enforced by law should scientists not comply of their own free will. Also on the issue of open access, the US Congress in January 2014 approved the so-called 2014 omnibus appropriations legislation. This policy requires free public access to every paper based on research even partially funded by a federal agency and submitted to a peer-reviewed journal, no later than 12 months after publication. Although this move towards open access is great news, the drawback is that the bill only deals with free access; the issue of re-use and copyright is left open. Also, the 12-month embargo period is not in line with open access in the sense of the Berlin Declaration. So it is a step forward, but there is still a long way to go, as Mike Taylor puts it in his latest blog post.

Let me conclude this blog with a small prediction: 2014 will be the year in which scientific output will be judged less and less by how much and where one has published, and more and more by what one publishes and in what (accessible and re-usable) form.

Open Access: a remedy against bad science

- December 4, 2012 in Uncategorized

Who has never been in the situation of having a set of data in which some points just didn't seem to fit? A simple adjustment of the numbers, or omission of the strange ones, could solve the problem. Or so you would think. I certainly have been in such a situation more than once, and looking back, I am glad that I left the data unchanged. On at least one occasion my 'pet' preformed theory proved to be wrong, and the 'strange data' I had found corresponded very well with another concept that I hadn't thought of at the time.
There has been a lot of attention in the media recently for cases of scientific fraud. Pharmaceutical companies are under fire for scientific misconduct (the Tamiflu story), and in the Netherlands the proven cases of fraud by Stapel (social psychology), Smeesters (psychology) and Poldermans (medicine/cardiology) have resulted in official investigations into the details of the malpractice by these scientists. All this has led to a sharp decline in the trust that people used to have in science (Flawed Science: the fraudulent research practices of social psychologist Diederik Stapel, report of the committee LND (Levelt, Noort, Drenth)). A report with recommendations for preventing scientific fraud, called "Sharpening policy after Stapel", was published by four Dutch social psychologists: Paul van Lange (Amsterdam), Bram Buunk (Groningen), Naomi Ellemers (Leiden) and Daniel Wigboldus (Nijmegen). One of the report's main recommendations is to share raw data and have them stored permanently, safely and accessibly for everyone. It will be clear that the issue of scientific misconduct is by no means restricted to the Netherlands, nor to specific fields of research. Other countries have similar stories in other scientific fields. For example, a committee like the committee LND mentioned above recently presented the outcome of an investigation into the scientific publications of Eric Smart from the University of Kentucky in the field of food science. And then there is the essay by John Ioannidis, "Why most published research findings are false", in which he gives six reasons for the bad quality of (medical) science…

In this article I propose that in almost all of the instances where scientific misconduct was found, open access to articles AND raw data would either have prevented the fraud altogether, or at the very least would have caused it to be exposed much more rapidly than has been the case in the current situation. Especially in the field of medical research, such a change can literally change lives.
To illustrate this point I want to make a distinction between different forms of ‘Bad Science’. On the author side we can have selective publishing (omitting data that do not fit one’s theory), non-reproducibility, data manipulation and at the far end of the spectrum even data fabrication. On the side of publishers we have publication bias (preferential publishing of positive results or data that confirm an existing theory), fake peer review and reviewers or editors pushing authors to make a clear story by omitting data (effectively resulting in selective publishing!).
PUBLICATION BIAS. The strategy of publishers to preferentially publish the most exciting stories and stories in support of a new finding (publication bias) contributes to selective publishing and sloppy science. Under great pressure to publish their (exciting) results, researchers take less care than would be advisable when they submit their research to highly ranked journals. Small wonder that so-called high impact journals also show very high retraction rates for manuscripts. Publication bias is also a real problem when validating scientific findings. Published results are often unrepresentative of the true outcome of many similar experiments that were not selected for publication. For example, an empirical evaluation of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990–2004, showed that a quarter of the randomised trials and five of six non-randomised studies had already been contradicted or found to have been exaggerated by 2005 (see: why current publication practices may distort science). At the same time, negative findings tend to be dismissed. In the case of efficacy studies for a new drug, two positive studies are sufficient for registration with the FDA, while cases are reported where the number of submitted negative studies was as high as 18 (see: selective publication of anti-depressant trials and its influence on apparent efficacy). I don't think I have to spell out the consequences that this has for health care.
QUALITY CONTROL. In cases where scientists commit fraud, the main control mechanism in the current situation consists of peer review and comments from colleagues who have read the article(s). This control sometimes suffices, but in many cases peer reviewers don't have, or don't take, the time to look at the actual content of an article in detail, let alone at the raw data. Often these data are not even available, or have inexplicably been lost somewhere along the way. Another complication is that, because of the enormous growth in the number of journals and in total scientific output, it has become increasingly difficult to do proper quality checks of all articles in the form of peer review. And the kind of rigorous study into malpractice like the one done by the committee LND in the case of social psychologist D. Stapel shows how much time (one and a half years) it can take to check on just one scientist. This underpins the notion that it will be impossible to check on all suspicious articles in this way. The solution, in my view, can be found in open access publishing. Making information available to virtually everybody automatically entails a control mechanism for scientific quality, by something like 'crowd-sourced peer review'. To state it more simply: the more people there are who can take a look at the complete data, the more likely it is that inconsistencies will be quickly spotted.
THE CASE FOR OPEN ACCESS. When articles and data are published open access, this fact alone discourages scientific misconduct. The availability of the complete article, including the raw data, to a very large audience has this kind of effect. One can be sure that if there is something wrong with the article, there will be someone out there who will spot it. The same mechanism is responsible for a major advantage of open access: scientific information that has been made available open access will reach such a large audience that there will always be someone out there who can and will improve on the ideas described in a publication. At the recent Berlin10 conference in Stellenbosch this so-called "proximity paradox" was brilliantly explained by Elliot Maxwell. He described the effect with a single sentence: "With enough eyeballs, all bugs are shallow", meaning that with enough dissemination any problem can be solved. Tim Gowers, a fervent proponent of open access, has exploited this in his now famous Polymath Project: sharing a very difficult mathematical problem with as many people as possible solved the problem in a fraction of the time that would have been possible doing it any other way. The company Innocentive.com exploits this effect by broadcasting a problem that has to be solved and offering a fixed monetary reward to anyone providing the solution. In this manner the "wisdom of the crowd" offers a way to keep science and scientists on track, while at the same time it stimulates a new way of doing science: by speeding it up, promoting the pursuit of new research, increasing innovation potential through contributions from unforeseen sources, and accelerating the translation from actual discovery to practical solutions (E. Maxwell). And the crowd can only be wise with open access to information. Another effect of open access on the quality of science is that it effectively reduces duplicative and dead-end scientific projects. And last but not least, open access facilitates accountability for research funding and helps to focus on real priorities.
THE ROLE OF OPEN ACCESS JOURNALS. While peer review remains indispensable for publishing good science, open access enables other forms of peer review than the ones traditionally in use. Open access articles can be peer-reviewed by many more people. Post-publication peer review will certainly prove to be an effective control mechanism for the quality of scientific articles, and for the detection of scientific misconduct. But pre-publication peer review can also be improved. BioMed Central recently started working with a new peer review system, Peerage of Science, which uses a 'pool' of possible peer reviewers much larger than the small number of reviewers that other journals usually have on call. This will speed up the process of peer review, and a smaller burden on individual reviewers will probably also improve the quality of the peer review process itself. Another very important point concerning the prevention of fraud is the open access publication of the underlying raw data together with the article. A number of initiatives exist in this area. Figshare and Dryad facilitate the storage and linking of raw research data, and journals are slowly starting to move towards the publication of raw data linked to the article. The National Science Foundation and other funders are now accepting data as first-class research output. However, we still have a long way to go. In spite of the fact that 44 of 50 journals had a policy in place for the sharing of data, a survey in PLoS ONE (Public availability of published research data in high-impact journals) concluded that research data had been made available online for only 47 of the 500 scientific publications that appeared in these journals in 2009. Despite all the efforts described above, this situation has probably not changed substantially.
CONCLUSION. Implementation of open access, inclusive of full access to raw research data, would minimize the possibilities for scientific fraud, which can be anything from biased presentation to the fabrication of data or the dismissal of negative results. It would most certainly change the way that science is done. Having the data available, open access, for the whole world to see and check, will provide a very strong incentive for scientists to publish good science. In my view it will prove to be very difficult indeed to present faulty data in an open access / open science system, and actually get away with it.

The Impact Factor: Past its Expiry Date

- June 22, 2012 in Uncategorized

Until very recently, the only ways to measure the quality of a scientific article were pre-publication peer review and post-publication citation rates.

Citation rates are still commonly used for the assessment of the quality of individual scientists and of individual scientific journals. In the latter case the measuring tool, the impact factor (IF), is thought to represent the chance of high citation rates when publishing your work in high impact journals. High citation rates for articles are thus taken to mean high quality of the underlying science. In reality, the impact factor has been shown to correlate poorly with actual citation rates (http://arxiv.org/abs/1205.4328, http://blogs.lse.ac.uk/impactofsocialsciences/2012/06/08/demise-impact-factor-relationship-citation-1970s/). In fact, it correlates rather well with recorded retraction rates for published papers (http://blogs.lse.ac.uk/impactofsocialsciences/2011/12/19/impact-factor-citations-retractions/).

This effectively undermines the assumed relationship between impact factor and the quality of science.

The use of the impact factor has also had another side-effect: it has led to the preservation of a publishing system in which authors sustain existing high impact toll access journals by publishing their work there, simply because these journals are labeled high impact. For these reasons and more (see below) I will argue that the impact factor is long past its (imaginary) expiry date and should urgently be replaced by a new system consisting of a combination of altmetrics and citation rates. To distinguish this system from the old one I would like to suggest a completely new name: the Relevance Index.

Open access publishing has been instrumental in the imminent demise of the Toll-access high impact journals.

Today many high quality open access journals are publishing an increasing number of highly cited and high quality scientific articles.

Although open access has been shown to increase citation rates, we should not make the mistake of wanting to continue using the impact factor. The reason for this is simple: open access opens the way to far better methods for assessing scientific quality.

For starters, many more people will be able to read scientific articles, and therefore post-publication peer review can replace the biased pre-publication peer review system. In addition to actual citation rates, the relevance of articles in an open access system can be measured by monitoring social media usage, download statistics, the quality of accompanying data, external links to articles, etc. In contrast to the journal-level impact factor, this system, called altmetrics, focuses on the article level. The field of altmetrics is under heavy development and has attracted much interest during the past few years. So much so, that this year's altmetrics12 conference (#altmetrics12), taking place in Chicago this month, has attracted a record number of visitors. The conference can be followed by a live stream on Twitter (#altmetrics12, @altmetrics12).
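
As a rough illustration of how such article-level signals could be combined into a single 'Relevance Index'-style score, here is a minimal sketch in Python; the chosen signals, normalisation caps and weights are invented for illustration and do not correspond to any established altmetrics formula.

```python
# Illustrative article-level score combining citations with altmetric signals.
# Signal names, caps and weights are invented for illustration only.

def relevance_score(metrics, weights):
    """Weighted sum of per-article signals, each scaled to 0..1 by a cap."""
    caps = {"citations": 100, "downloads": 10_000,
            "social_mentions": 500, "data_reuse_links": 50}
    score = 0.0
    for name, weight in weights.items():
        value = min(metrics.get(name, 0) / caps[name], 1.0)
        score += weight * value
    return score

weights = {"citations": 0.4, "downloads": 0.3,
           "social_mentions": 0.2, "data_reuse_links": 0.1}

article = {"citations": 12, "downloads": 3200,
           "social_mentions": 45, "data_reuse_links": 3}

print(f"Relevance score: {relevance_score(article, weights):.2f}")  # -> 0.17
```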

Apart from the fact that open access is enabling the development of better quality assessment tools than the impact factor and citation rates, open access in itself leads to better quality science through at least three separate mechanisms:

1) by counteracting the publication bias in the current publication system, 2) by discouraging selective publishing on the part of the author, and 3) by minimizing scientific fraud through the publication of underlying data. Let me explain.

1) Counteracting the publication bias in the current publication system. The current publication system has evolved in such a way that the more spectacular or unusual the results are, the greater the chance that they will be accepted for publication in leading scientific journals. The same goes for publications confirming these findings. Negative findings tend to be dismissed. In the case of efficacy studies for a new drug, two positive studies are sufficient for registration with the FDA, while cases are reported where the number of submitted negative studies was as high as 18 (see: selective publication of anti-depressant trials and its influence on apparent efficacy). This publication bias is a real problem when validating scientific findings. Published results are very often unrepresentative of the true outcome of many similar experiments that were not selected for publication. For example, an empirical evaluation of the 49 most-cited papers on the effectiveness of medical interventions, published in highly visible journals in 1990–2004, showed that a quarter of the randomized trials and five of six non-randomized studies had already been contradicted or found to have been exaggerated by 2005 (see: why current publication practices may distort science and references therein). The strategy of publishers to preferentially publish the most exciting stories and stories in support of a new finding is linked to creating a status based on selectivity. This selectivity is then defended with the argument of limited print space. But selectivity is in fact used for something else entirely. In economic terms it is a way for publishers to turn a commodity (scientific information) whose future value is uncertain into a scarce product. This in itself is the well-known commercial process of 'branding', in which a product with no clear intrinsic value gains value through restricted access and artificial exclusivity. In the case of scientific publications this value then translates into status for the journal and for the scientist publishing in that journal. The most astonishing part of the story, however, is that publishers get their product (scientific information), which has largely been produced using public funding, for free, and succeed in selling it back to the public with the aid of commercial 'branding'. Seen in this light, publication bias is a by-product of commercial branding. Open access would put an end to these practices. It would give free access to information to the people who already paid for it. At the same time, implementation of open access publishing would counteract the publication bias imposed by the publishers, and possibly also by stakeholders like pharmaceutical companies, because the grand total of papers published in this system would be more representative of the actual work done in the field. For the field of malaria research, for example, the effect would be amplified through an increase in the number of relevant publications from researchers in the developing world. All this would lead to better science.

2) Discouraging selective publishing on the part of the author. The post-publication peer review made possible by open access (discussed in another post; click here) would also contribute to better science, because it would provide a control mechanism against selective publishing on the part of the author of a scientific publication.

3) Minimizing scientific fraud through the publication of underlying data. An important but often overlooked aspect of scientific publishing is the availability of the original data behind the actual science. For open access to really work, access should not be restricted to the mere content of published articles in scientific journals. Access to the raw data behind the articles is equally important, because validation of a publication is not easy without access to the real data. In spite of the fact that 44 of 50 journals had a policy in place for the sharing of data, a recent survey in PLoS ONE (Public availability of published research data in high-impact journals) concluded that research data had been made available online for only 47 of the 500 scientific publications that appeared in these journals in 2009. Implementation of an open access publication system inclusive of full access to raw research data would offer the further advantage of minimizing the possibilities for scientific fraud, which can be anything from biased presentation to the fabrication of data.

Open Access is the future of scientific publishing, and as this future is near, the impact factor and Toll Access journals will soon become relics of the past.

In my view the impact factor has been flawed from the beginning, and the sooner we make the transition to open access and new forms of metrics, the better: better for science, better for citizens, better for companies and businesses, better for countries and better for society as a whole.

 

Open Access: Not just a matter for scientists

- May 17, 2012 in Uncategorized

Eric Johnson is an engineering professional working as a patent facilitator for a multinational company. One of his jobs is to find information and "connect the dots" related to the intellectual property of competitors, in order to develop research strategies for his company. He is also a testicular cancer survivor (with multiple occurrences) who used the medical literature to research his condition and inform his treatment. He says: "I do not believe I would be alive today if it were not for the information that can only be accessed by the layman (patient) in online sources".

 

This is just one story of many on the website WhoneedsAccess, where scientists and non-scientists speak out about their need for access to information: information that is often inaccessible without expensive subscriptions to scientific journals or payment of €20-30 per publication. The website is the initiative of Mike Taylor, a scientist and open access advocate, and a member of the @ccess Initiative, a group dedicated to the promotion of open access to scientific publications and data for everyone, scientists and non-scientists alike.

The requirement of open access to information is based on the following three principles:

  1.  Access to information is a fundamental right (similar to the right to clean air, clean water, medical care, education)
  2. The accumulated knowledge of mankind is owned by everyone and cannot and should not be claimed or shielded from access by individuals, organizations, firms or governments.
  3.  Knowledge by itself has no intrinsic value; it only derives value from being shared with as many people as possible.

The dissemination of knowledge on a large scale only became possible through the distribution of books and journals by publishers among a growing group of (highly) educated people. Before the introduction of the internet (in the 1990s), publishers had built up a monopoly on the production and distribution of knowledge through printed scientific journals and books. The increasing costs of subscriptions to scientific journals were justified by the publishing companies by pointing to increasing production and distribution costs. Scientists and research institutions had no choice but to pay. After the introduction of the internet, costs fell significantly, and modern digital reproduction and distribution have made these costs almost negligible today. The publishers, however, have continued to increase their prices and to shield most publications from free access on the internet. Because of this, scientists, institutes and other knowledge seekers continue to pay large sums to publishers for a now basically redundant service.

The reason for this is emotional rather than scientific. Major scientific publishers were able to maintain a monopoly on the dissemination of scientific knowledge, because a growing number of authors with a growing number of publications have felt the compelling need to be published in a very limited number of so-called High Impact Factor journals. These journals are renowned and sought after, because renowned scientists voluntarily continue to publish there. And these journals are largely owned by large multinational publishing companies.

So what is in fact happening is that scientists are publishing in these journals because they THINK they HAVE TO, because OTHERS DO SO, and also because scientific committees continue to JUDGE SCIENTISTS by their NUMBER OF PUBLICATIONS IN HIGH IMPACT JOURNALS, which ARE high impact BECAUSE scientists CONTINUE to publish their best work there. The result is a vicious circle that seems hard to break.

In this way publishers have succeeded in creating near-ideal market conditions for themselves: a product that is delivered for free (by scientists), a quality assurance system that is delivered for free (peer review by scientists), and an absurdly high price for access to information that is determined entirely by the same publishers. For one thing, it is entirely unjustified that after publication one has to pay again to get access to the results of the research, as much of that research has already been paid for with public funds.

How profitable publishing science can be is illustrated by the following figures: in 2011 Elsevier asked $7,089 for a subscription to Theoretical Computer Science (source: American Mathematical Society). That same year Elsevier made a profit of £768 million on a turnover of £2.1 billion, a margin of 37.3%. 78% of those sales came from subscriptions to scientific journals. Compare this with a margin of 24% in 2011 for Apple, the highest ever in the history of that company. Another example: during the last six years, average prices for access to online content from two large scientific publishers have increased by an incredible 145%.

For a long time it seemed that the publishers could continue this highly lucrative business without too much trouble. That is… until 21 January of this year, when Tim Gowers, Professor of Mathematics at Cambridge University, made an appeal to his colleagues to boycott Elsevier, one of the largest scientific publishers. That call was so successful that the list currently counts over 11,000 signatures. More and more people seem to finally realize that something can be done against the extremely high cost of subscriptions to scientific journals and the inaccessibility of scientific information, namely NOT PUBLISHING [in these journals] and a MANDATORY REQUIREMENT FOR OPEN ACCESS.

The call from Tim Gowers has launched what the English press is already calling the Academic Spring. For example, Harvard, one of the richest universities in the world with an endowment of $31.7 billion, decided to cancel "too expensive" journal subscriptions because it could no longer afford them, at the same time asking its professors to publish more or less mandatorily in open access journals in order to "help increase the reputation of these scientific journals". In England, the Minister for Science, David Willetts, announced at a conference with the United Kingdom Publishers Association that all publicly funded research should be published as open access. The government has called on Jimmy Wales, one of the founders of Wikipedia, for help in this process. The Wellcome Trust had already issued an announcement to the same effect for the research that it funds. The World Bank announced last month that all existing and new publications, reports and documents will be open access by July 2012. And Neelie Kroes, EU Commissioner for the Digital Agenda, said on May 4, 2012 in a speech at the re:publica conference in Berlin on the topic of 'Internet Freedom' that "entire industries that were once based on monitoring and blocking could now be rebuilt on the basis of customer friendliness, sharing and interaction." A clearer reference to the scientific publishers can hardly be imagined. The EU has now issued a directive whereby all research funded from its €80 billion research budget must be published open access from 2014 onwards. And very recently the Access2Research initiative has launched a campaign for open access through a petition to the White House. The action has yielded over 10,000 signatures in slightly more than two days and will probably reach the required 25,000 signatures long before the deadline of June 19, 2012.

 

In the Netherlands, NWO (the Dutch research funding organisation) has, for a number of years now, been engaged in promoting open access to scientific publications. Last year, €5 million in funding was made available for adapting existing open access journals or creating new ones. One of the new journals that will receive funding is the Malaria World Journal, an online open access journal for malaria research from the Netherlands-based MalariaWorld internet platform.

Advantages and disadvantages of open access

The benefits of open access are economic, social and scientific in nature:

Economic advantages. The Committee for Economic Development (CED) in the USA has concluded that the benefits of the introduction of open access at the NIH have outweighed the costs many times over. And in Australia it was found that open access to the information held by the Bureau of Statistics cost $4.6 million in investments and yielded $25 million in benefits. In England, the Open Access Implementation Group has determined that the public sector has already saved £28.6 million through open access, and that each 5% increase in open access publications will save the public sector an additional £1.7 million.

Social advantages. Because access to information is the key to development, innovation and prosperity, open access also has significant social implications. Good information for citizens, politicians, businessmen and others forms the basis of a functioning democracy. This is certainly of great importance for non-democratic countries, for example in Africa. And open access to information for patients, for example, can literally save lives. Open access to information for nurses will increase their knowledge and motivation and contribute to better care. And there are countless other examples to consider.

Advantages for science. The same CED mentioned above has said that its research also clearly showed that the research process itself was considerably accelerated by immediate and free access to results. Commercial applications followed more quickly and there were fewer dead-end research projects. The quality of research was in fact found to be markedly improved by open access, probably because of much more feedback and monitoring by fellow researchers. In addition, open access created many more opportunities for innovation, because more people, and especially more people from different backgrounds, tried to solve the same problem. Open access also provides a solution to the so-called "local search phenomenon", in which solutions tend to be less than optimal when the group searching for them is smaller and less diverse.

Disadvantages of open access. The only disadvantage of open access to information actually lies mainly on the side of the old-school publishers that do not want to change their old business models. In addition, those who do change will have to accept significantly lower revenues. Poor quality of publications is often claimed to be a major disadvantage of open access. According to the critics this poor quality would be the result of less selection and a lack of peer review. The facts show otherwise. The extreme selection practiced by the high impact journals leads to a large number of manuscripts being offered for publication and then withdrawn by the authors, causing very relevant publications to trickle down and end up scattered across smaller journals. Furthermore, open access and peer review are two different issues. Many open access journals (e.g. PLoS and BioMed Central journals such as PLoS ONE and Malaria Journal) even have a more extensive peer review process, because they do both pre- and post-publication peer review of their publications. Moreover, because many more people share knowledge, other new metrics can be, and are being, used for the assessment of the quality of scientific publications, such as the number of downloads, social media messages, the number of pageviews and post-publication peer review. These methods have the advantage that they take into account the reality of knowledge dissemination via the World Wide Web, and especially when combined with the conventional citation index (the H-index), they are a better indicator of the importance of a study than the citation index alone.

A chronology of open access

• 1991: arXiv.org, the first open access repository
• 2000: BioMedCentral, the first open access publisher
• 2001: Wikipedia
• 2002: Budapest Open Access definition
• 2003: Bethesda definition of Open Access
• 2003: Berlin open access definition
• 2003: Open access publisher PLoS
• 2008: Combined BBB definition of open access
• 2012: Academic Spring

  • Tim Gowers: CostofKnowledge list Boycott Elsevier
  • Organisations and governments requiring open access:
    • Wellcome Trust
    • UK Science minister
    • World Bank
    • EU horizon 2020 program
  • Access2Research petition to the White House

The Access principle revisited: open access and the Knowledge Commons

- May 2, 2012 in Uncategorized

In a recent article Stevan Harnad, one of the most outspoken supporters of open access, writes that peer access has priority over public access. After noting that "for some fields of research — especially health-relevant research — public access is a strong reason for public funders to mandate providing public access", he goes on to give a number of reasons why, for other areas, public access is less important.

His reasoning goes as follows: "most peer-reviewed research reports themselves are neither understandable nor of direct interest to the general public as reading matter" and "Hence, for most research, 'public access to publicly funded research' is not reason enough for providing OA, nor for mandating that OA be provided".

I strongly object to this representation of the "facts". There is no reason at all to think that science is too difficult for non-scientists. There is all the more reason to believe that non-scientists can and will contribute to science in unsuspected ways, and that on top of this they are also in need of information. Thomas Insel, director of the National Institute of Mental Health, recently described two examples of citizen science that were reported at the Sage Bionetworks Commons Congress in San Francisco: Meredith, a 15-year-old high school student from San Diego, wrote this year's breakthrough paper on modeling global epidemics, and an 11-year-old boy from upstate New York solved a problem in protein folding using a computer game called Foldit. Another argument in favor of open access for everyone was given by Robbert Dijkgraaf, president of the Royal Netherlands Academy of Arts and Sciences, who said that science should be open because the more open science is, the more attractive it will be for future generations to become scientists. In addition, we shouldn't forget that the public of non-scientists is a mixed group of people consisting of entrepreneurs, students, lawyers, politicians, health professionals and many others. For them, access to information is no less vital than it is for scientists.

The heart of the matter is that science is not, nor should it be, the exclusive domain of scientists. Science (and for that matter all other human knowledge generating activities) is a Knowledge Commons. The core of the open access debate stems from the following three principles: 1) access to knowledge is a human right; 2) the combined knowledge of humanity can never be the property of any single person or organization, it is a human heritage; 3) knowledge in itself is worthless, it only becomes valuable when it can be freely shared and used. These three principles are closely linked to the definition of intellectual freedom as given by the American Library Association:

Intellectual Freedom is the right of every individual to both seek and receive information from all points of view without restriction. It provides for free access to all expressions of ideas through which any and all sides of a question, cause or movement may be explored. Intellectual freedom encompasses the freedom to hold, receive and disseminate ideas.

There have been times and instances when science was the domain of select groups. However, the invention of printing and the development of publishing businesses, at least initially, did a great deal to make science accessible to the public. Also, the time when universities were knowledge temples with exclusive rights to knowledge came to a definite end when, in 1597, Gresham College in London became the first open university, where access to lectures was free for everyone. This Gresham College later gave rise to the Royal Society. Thus, openness in science started centuries ago, but it has not yet led to open science.

What has gone wrong? One of the main things that has gone wrong is that openness has been severely compromised by the monopolization of knowledge by scientific publishers that occurred during a great part of the 20th century. The erosion of knowledge as a commons, in the form of restricted access to information, has done great harm to science. But it seems almost certain that this is going to end very soon. Just as the invention and widespread use of the internet, specifically designed for sharing [information], has been instrumental in the de-monopolization of software (open source software), books (free ebooks), education (open courseware) and more, it will be instrumental in the liberation of science (open access publications and data). Scientists have been slow to adapt to the realities and opportunities of the internet, while the general public has been quick to accept and use its sharing principle. However, with a growing realization [of these opportunities] among scientists, the movement for open access and open science is gaining momentum by the day. The Cost of Knowledge, a list of scientists who have signed a petition to boycott Elsevier, one of the major global players in scientific publishing, already has more than 10,000 signatories. The time has clearly come for a change, and because the internet cannot be controlled by publishers, scientists and citizens now have a unique opportunity to communicate and share their knowledge freely through it, and through open access publishing, to finally start the new era of networked open science.

Game Over for old-school publishers

- April 24, 2012 in Uncategorized

The announcement by Harvard, or more accurately, by Harvard’s Faculty Advisory Council, must surely have come as a titanic blow for old-school publishers like Elsevier.

These publishers may still have been thinking that the whole open access issue would blow over, but the unexpected move by Harvard, one of the richest and most prestigious universities in the world, must have told them that it is game over for their very profitable scientific publishing model.

In their announcement to Faculty Members in all Schools, Faculties, and Units, the Council stated that

“ [we] write to communicate an untenable situation facing the Harvard Library. Many large journal publishers have made the scholarly communication environment fiscally unsustainable and academically restrictive. This situation is exacerbated by efforts of certain publishers (called “providers”) to acquire, bundle, and increase the pricing on journals”.

 

Although this introduction points to a strong economic reason for this move against restrictive publishers, the Council goes on to propose nine options that in its view could solve the problem, all of which would prove disastrous for those publishers.

For me it is especially the second option that deals the final blow:  “Consider submitting articles to open-access journals, or to ones that have reasonable, sustainable subscription costs; move prestige to open access (F)”

If scientists take this suggestion seriously, they will expose the Achilles heel of many high-impact, renowned journals: their status is upheld by (excellent) scientists assuming they have to publish their work there. And these scientists feel they have no choice, because their peers publish there too. This closed-loop system can only work as long as everybody believes they have to publish in these journals for that reason. The moment scientists start turning away from this idea and move towards open access journals, the same mechanism will cause others to start publishing there as well. And that moment does not seem far away.

A major factor that still forces many (early-career) scientists to publish in established high-impact journals is the belief, among the committees dealing with appointments, tenure and grant approvals, that publications in precisely those journals matter most. For any lasting change to take place, these committees will have to change their attitude as well. And they may even do so on the basis of facts. There are two major persistent myths about open access: 1) that the quality of the science published is inferior to conventionally published science, and 2) that open access publishing is unsustainable. Both myths are effectively neutralized by the very successful business models of PLoS and BioMed Central, which publish peer-reviewed open access journals with high impact factors and whose output already surpasses that of many conventionally published journals in number. Other open access publishers are following suit.

With all the events surrounding open access, we shouldn’t lose sight of the main reason for wanting open access in the first place: that progress in science critically depends on the free and unrestricted sharing of knowledge.  This sharing will most probably take place in open science communities using open access information sources. The future of science has no place for any restrictive systems and certainly not for old-school scientific publishing. And because the future has just begun, it really is Game Over for Elsevier and similar publishing businesses.

The next revolution in Science: Open Access will open new ways to measure scientific output

- April 19, 2012 in Uncategorized

Open Access will not only change the way that science is done, it will also change the way that science is judged. The way that scientific output is measured today centers around citations. Essentially, on an author level this means the number of publications and citations of an author’s articles (author-level metrics). On a journal level, it means the average number of citations that articles published in that journal have received in a given time period (journal-level metrics).

For author-level metrics the Author Citation Index has now largely been replaced by the H-Index, introduced in 2005 by J.E. Hirsch: an author has index h if h of his or her articles have each received at least h citations at a given date. For journal-level metrics, the Journal Citation Reports (JCR) is a database of all citations in more than 5,000 journals—about 15 million citations from 1 million source items per year. From this the Journal Impact Factor (JIF) is derived: the number of citations in the current year to items published in the previous two years (numerator) divided by the number of substantive articles and reviews published in those same two years (denominator). It effectively represents the average number of citations per year that one can expect to receive by publishing in a specific journal.
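To make the two definitions above concrete, here is a minimal sketch in Python of both calculations; the citation counts and the figures fed to the impact-factor function are invented purely for illustration.

# Minimal sketch of the two metrics described above; all numbers are
# hypothetical and chosen only to illustrate the calculations.

def h_index(citations):
    """Largest h such that h articles have received at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_prev_two_years, items_in_prev_two_years):
    """Citations received this year by items from the previous two years,
    divided by the number of citable items published in those two years."""
    return citations_to_prev_two_years / items_in_prev_two_years

print(h_index([25, 8, 5, 4, 3, 1]))  # -> 4: four articles have at least 4 citations each
print(impact_factor(600, 150))       # -> 4.0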

Although the JIF is intended for large numbers of publications, it is also often used in the evaluation of individual scientists. Granting agencies and university committees, for instance, often substitute the number of articles an author has published in high-impact journals for the actual citation counts. The introduction of the H-Index has diminished the use of the JIF for individual scientists, but the practice has yet to disappear. The JIF has other flaws as well. Imagine a journal that only publishes reviews: such a journal would get a high impact factor, yet the real impact of its papers on the field would be much smaller than that of original research papers. An easy way around this problem is to apply the H-Index methodology to journals, which is precisely what Google Scholar Metrics does. Because Google has only offered this since 1 April 2012, it is too early to tell whether it will become a widely accepted method for journal-level metrics.
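As a small follow-up to the sketch above, the same h_index function can be applied at the journal level by feeding it the citation counts of everything a journal published in a given window; the five-year window and the numbers below are assumptions for illustration, in the spirit of Google Scholar's h5-index.

# Reusing the h_index function from the previous sketch at the journal level:
# hypothetical citation counts for all articles one journal published in five years.
journal_citations = [120, 95, 60, 44, 30, 22, 15, 9, 4, 1]
print(h_index(journal_citations))  # -> 8: eight articles have at least 8 citations each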

The H-Index, Google Scholar Metrics and the JIF are all reasonable indicators of scientific quality. In measuring real-world impact, however, they are seriously flawed. Think for a moment of how impact is felt for any topic you can think of. Every one of us will consider the publication itself, but probably also downloads, pageviews, blogs, comments, Twitter, and various kinds of media and social network activity (Google+, Facebook), among other things. In other words, all the activity that can be measured by “talking” about an article through social media and other online channels can be used to give a more realistic impression of its real impact. Since talking about articles depends on actually being able to read them, this is where open access comes into play. Article-level metrics of this kind only make sense when many people are able to discuss the actual content of published articles, which in turn is only possible when the articles are open access. The optimal conditions for altmetrics would be reached when all articles are published open access, but even with the current growth of open access publishing the method is already starting to make sense.

A number of article-level metrics services are currently in the start-up phase. Altmetric is a small London-based start-up focused on making article-level metrics easy. It does this by watching social media sites, newspapers and magazines for mentions of scholarly articles. The result is an “altmetric” score: a quantitative measure of the quality and quantity of attention that a scholarly article has received. The altmetric score is also implemented in UtopiaDocs, a PDF reader which links an article to a wealth of other online resources such as Crossref (DOI registration agency), Mendeley (scientist network), Dryad (data repository), Scibite (tools for drug discovery), Sherpa (OA policies and copyright database) and more. A disadvantage of UtopiaDocs may be that it focuses on the PDF format instead of an open format, and the system also seems rather slow. PLoS likewise uses article-level metrics, attaching comprehensive information about the usage and reach of published articles to the articles themselves, so that the entire academic community can assess their value. Unlike the services above, PLoS provides a complete score built on a combination of altmetrics, citation analysis, post-publication peer review, pageviews, downloads and other criteria. Finally, Total-Impact makes extensive use of social media analysis and other online statistics to measure the total impact of a given collection of scientific articles, datasets and other outputs. Its focus on collections represents yet another approach to the problem of evaluating scientific output.
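To illustrate, and only to illustrate, what an article-level score of this kind might look like, here is a small sketch that combines several openly observable signals into one weighted number; the signal names, weights and counts are all invented and do not reproduce the actual formulas used by Altmetric, PLoS or Total-Impact.

# Illustrative sketch only: combining several openly observable signals into
# one article-level score. Signal names and weights are invented for the example.

SIGNAL_WEIGHTS = {
    "citations": 5.0,      # traditional scholarly citations
    "downloads": 0.05,     # full-text downloads
    "pageviews": 0.01,     # HTML page views
    "tweets": 0.25,        # mentions on Twitter
    "blog_posts": 2.0,     # blog coverage
    "news_stories": 3.0,   # newspaper and magazine coverage
}

def article_level_score(signals):
    """Weighted sum over whatever signals were observed for one article."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count
               for name, count in signals.items())

# Hypothetical counts for a single open access article.
example = {"citations": 12, "downloads": 3400, "pageviews": 9000,
           "tweets": 40, "blog_posts": 3, "news_stories": 1}
print(article_level_score(example))  # -> 339.0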

The overview above is probably far from complete, so please feel free to add other possibilities in your comments to this post. I do think, however, that it accurately reflects how fast the field of bibliometrics is moving, and that Open Access will provide the key to developing and implementing better ways to evaluate scientific output. Compared with current practices, all of which are based on citations alone, the inclusion of altmetrics, online usage statistics and post-publication peer review in an open access world will represent a true revolution in the way science is perceived by all, scientists included.

Point of No-Return for Open Access

- April 11, 2012 in Uncategorized

The Open Access movement is gaining momentum by the day. Just a short while ago you could still hear voices telling us that this was all just hype and would go away. Recent events, however, have proved them wrong.

The call for action by Tim Gowers may have marked a point of no return for the open access movement. It almost seemed as if scientists suddenly and collectively realized the absurdity of a situation they had taken for granted for far too long. Looking back, it really is absurd that scientists have provided publishers with their work for free and reviewed that work for free, while publishers in return have charged the scientists and everyone else for access to the information.

There has been a lot of media attention for open access recently. The Guardian put open access on its front page in an article on the Wellcome Trust’s move in favor of open access. Sir Mark Walport, director of the Wellcome Trust, said that his organisation “would soon adopt a more robust approach with the scientists it funds, to ensure that results are freely available to the public within six months of first publication”.

Another major event has been the announcement by the World Bank that it will “become the first major international organization to require open access under copyright licensing from Creative Commons—a non-profit organization whose copyright licenses are designed to accommodate the expanded access to information afforded by the Internet”. On April 10, 2012 the World Bank launched a repository as a one-stop shop for most of the Bank’s research outputs and knowledge products, providing free and unrestricted access to students, libraries, government officials and anyone else interested in the Bank’s knowledge. Additional material, including foreign-language editions and links to datasets, will be added in the coming year. This move is especially significant because the Bank is not just making its work free of charge, but also free for use and reuse.

But with the increased media attention comes the danger that we may lose sight of what is meant by the term ‘open access’. With everyone starting to talk about ‘open access’ as if it were one clearly defined term, it has become more urgent than ever to have clarity on this issue. This was one of the reasons for starting the @ccess Initiative, where this blog is posted. Because open access can range from somewhat restricted (free to read only) to completely unrestricted (completely free for use and reuse), we have proposed the term @ccess for free and unrestricted access to information in accordance with the BBB definition.

Another reason for the @ccess Initiative, and a matter of increasing importance, is the EASE of access to information. As more and more information becomes available through open access, the difficulty of finding the right information will also increase. A great number of institutional repositories can only work when they are all adequately cross-linked and working together, a near-impossible task to accomplish. A better option would be to reduce the number of repositories by limiting them to big (inter)national organisations such as the WHO, the World Bank, the FAO and others.

Yet another option, and one I personally favor, can run in parallel with the last option above: the storage and management of information by specialized scientific communities, as I have described in another blog and on the @ccess communities page of this website. To give an example, and the one we are actually working on: together with MalariaWorld we are developing a comprehensive database of malaria-related publications. At the same time we will ask researchers to deposit their manuscripts and data in an open access repository that is linked to the database. The database will also link out to open access articles, and for restricted-access publications we will seek to get as many author manuscripts as possible deposited as well. The community will eventually provide open access to all information, offer a platform for collaboration and information exchange, and serve as a communication hub for everyone seeking information on, or working on, malaria. Other communities can be formed using this model. In this way we would move towards a system of interlinked scientific communities and easy access to pertinent information through these communities, which would also maximize the chances for scientific collaboration and innovation. The combination of open access and the participation of scientists and citizens in the scientific enterprise will change the way science is done. Networked scientific communities will have far better chances to tackle the world’s toughest problems, not least because open access would give people in developing countries equal opportunities to profit from, and contribute to, science. To quote Peter Suber: “What you want is for everybody to have access to everything, and not just the north making its research available to the south or vice versa. Open access should be a two-way street.” The proposed structure for scientific @ccess communities would be perfectly suited to this task.
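As a purely illustrative sketch of how one record in such a community database might tie an article to its repository deposits, consider the following; all field names are assumptions and do not describe the actual MalariaWorld schema.

# Illustrative only: one possible shape for a record linking a publication
# to manuscripts and datasets deposited in a community repository.
# Field names are invented and do not reflect the actual MalariaWorld schema.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PublicationRecord:
    doi: str                              # persistent identifier of the article
    title: str
    open_access: bool                     # True if the published version is freely readable
    article_url: Optional[str] = None     # link to the open access article, if any
    manuscript_url: Optional[str] = None  # author manuscript held in the linked repository
    dataset_urls: List[str] = field(default_factory=list)  # underlying data deposits

# Example: a restricted-access article for which the community holds the manuscript.
record = PublicationRecord(
    doi="10.1000/example.2012.001",
    title="A hypothetical malaria study",
    open_access=False,
    manuscript_url="https://repository.example.org/manuscripts/12345",
)
print(record)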

Comment on the RCUK draft Policy on Open Access

- April 5, 2012 in Uncategorized

Today I have submitted my comments on the RCUK proposed policy on access to research outputs. Here I am posting these comments publicly.

 

Summary

I am very happy to see these proposed changes in the RCUK’s open access policy, especially the policy on text and data mining, described as:

Specifically stating that Open Access includes unrestricted use of manual and automated text and data mining tools; and unrestricted reuse of content with proper attribution

 

I do have strong objections to the acceptance of delayed open access as a valid form of open access. This may be a compromise to make (certain) publishers accept the policy, but there are enough open access publishers that do not impose an embargo, and I do not see why we (scientists) should give in to the wishes of a specific group of publishers. For me, any embargo obstructs the advancement of science and the timely sharing of knowledge and should therefore not be part of open access. I would also personally welcome it if your definition of open access referred to the Budapest or the BBB definition, as we do on the website of the @ccess Initiative, of which I am a member. Finally, I would like to see more collaboration and cooperation with the EU Digital Agenda, which in my view runs the same course as the RCUK.

A few more comments and suggestions on some of the proposed changes are listed below.

(2) What do the Research Councils mean by Open Access?

Search for and re-use the content of published papers both manually and using automated tools (such as those for text and data mining) without putting restrictions on the amount of data, provided that any such reuse is subject to proper attribution.

(3) How is a Scholarly Research Paper made Open Access?

but in practice the Research Councils will accept that access may be restricted to comply with an embargo period imposed by the publisher

An embargo period is not acceptable.

(4) What do journals need to do to be compliant with Research Council policy on Open Access?

a) This may require payment of an ‘Article Processing Charge’ to the publisher

I recommend adding a note on what constitutes an acceptable charge, because this should not be left open.

(5) What Research Outputs will be covered by Research Council Policy on Access to Research Outputs and where should they be published?

No comment

(6) When should a paper become Open Access?

In future, Research Councils will no longer be willing to support publisher embargoes of longer than six or twelve months from the date of publication, depending on the Research Council

Delayed open access is not acceptable (see the summary above).

(7) How is Open Access paid for?

Research Council grant funding may be used to support payment of Article Processing Charges to publishers

I think the policy of paying for open access papers from grants is a good one. I would, however, impose limits on an acceptable APC (see my comment under (4)).

(8) Acknowledgement of funding sources and access to the underlying research materials

The underlying research materials do not necessarily have to be made Open Access, however details of relevant access policies must be included

In my opinion the underlying research materials cannot be seen as separate from the results of research; they should fall under the same rules and should be open access. In fact, the underlying data are vital for judging the quality of research. I do recognize the need for exceptions for some datasets, such as patient medical data, but for these cases a list of exceptions would suffice.

(9) Implementation and compliance

No comments except for agreement

 

Dr. Tom Olijhoek
Independent researcher
Consultant Dutch Malaria Foundation
Associate Editor MalariaWorld Journal
Group Coordinator @ccess Initiative
1336GB Almere
The Netherlands
tomolijhoek@malariastichting.nl
@greboun