Saturday, May 16, 2009

Rankings of "Open Access"-journals: PLoS ONE rocks!

As several of you might know, I have been complaining about the conservative attitudes some of my colleagues have towards "Open Access" publishing in general, and towards PLoS ONE in particular. This is quite frustrating, as several of my colleagues on the Editorial Board and I are doing our best to promote this new "revolutionary" journal, in the hope that it will in the long run change the entire publication landscape - to the benefit of scientists, readers, taxpayers and the general public alike.

There are two classical objections against PLoS ONE among ecologists and evolutionary biologists. First, some of them are afraid to publish in PLoS ONE since it is not yet listed in Thomson's databases (ISI), such as "Web of Science". This is the database from which Impact Factors (IFs) are calculated, and Thomson actually has a monopoly (!) on how IFs are calculated. Since Thomson has up until now refused to list PLoS ONE in its database, some of my scientific colleagues are afraid that work published in that journal will be "forgotten" or not appreciated by the scientific community.

This objection partly reflects a lack of knowledge, and the misunderstanding that citation databases represent some kind of "objective truth". In reality these databases are run by commercial companies with their own agendas. ISI is certainly not the only database; it is one of the slowest to list new publications, and it covers only a small minority of all scientific journals. Scopus, for instance, covers more journals, and so, probably, do Google Scholar and PubMed. Luckily, there is thus strong competition among the different databases, and hopefully ISI will soon be outcompeted and run out of business by better and faster alternatives with broader literature coverage. ISI is to the scientific world what Microsoft is to the computer world: a mean big company that we should all hate!

The second objection is that PLoS ONE has a publication policy that aims for "technical quality", rather than relying on arbitrary and subjective acceptance criteria such as "novelty". In that respect, PLoS ONE differs significantly from all other journals, including Science, Nature, PNAS and PLoS Biology. The philosophy of PLoS ONE is that the future scientific readership should judge whether a paper is "significant" or not, not a few subjective referees or journal editors. The scientific process does not end with the publication of a paper; it starts there. This is when a paper is read, discussed and (hopefully) cited, and thus "accepted" as important by other scientists.

Some researchers consider this a weakness of PLoS ONE, and fear that it will become a "dumping ground" for poor-quality papers that could not be published elsewhere. I, and many others, on the contrary view this as a strength of PLoS ONE, and I honestly cannot say that PLoS ONE has become the vehicle for bad papers that some feared it would. But I am of course biased in my views, since I am involved in the journal, and it is up to others to judge.

Given the inherent problems with impact factors, how they are increasingly becoming "corrupt", and the arbitrary parts of traditional publication (biased referees, commercial databases, unfair editors etc.), I think we can probably all agree that there is a need for new criteria for ranking journals. These criteria could be based on things like the number of article downloads, the number of citations, more or less informal "ranking lists" compiled by the scientific community, blog coverage, media coverage etc. None of these rankings is likely to be perfect or to reflect the final "truth", but they would provide a nice complement to traditional measures such as journal impact factors.
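To make the idea concrete, here is a toy sketch in Python of what such a complementary ranking could look like: a weighted mix of downloads, citations, and media/blog coverage. All journal names, counts, and weights below are invented purely for illustration; this is not a proposal for the "right" weights.

```python
# A toy composite journal ranking, mixing several of the complementary
# measures mentioned above instead of relying on impact factor alone.

def normalize(values):
    """Scale raw counts to the 0-1 range across the journal set."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_ranking(journals, weights=(0.4, 0.4, 0.2)):
    """journals: dict name -> (downloads, citations, media mentions)."""
    names = list(journals)
    metrics = list(zip(*journals.values()))   # one tuple per metric
    scaled = [normalize(m) for m in metrics]  # put metrics on a common scale
    scores = {
        name: sum(w * s[i] for w, s in zip(weights, scaled))
        for i, name in enumerate(names)
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented numbers: with these, "Journal A" comes out on top.
ranking = composite_ranking({
    "Journal A": (50_000, 1200, 30),
    "Journal B": (20_000, 2000, 5),
    "Journal C": (5_000, 300, 2),
})
print(ranking)
```

The normalization step matters: downloads, citations and media mentions live on very different scales, so each metric is rescaled before the weights are applied.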

The blogger "The Open Source Paleontologist" has produced some such ranking lists of OA journals, and you can read about them here, here and here. Although these ranking lists have their limitations and only deal with the paleontological community, they are nevertheless interesting and revealing. I predict that we will see many more of these lists in the future, and I bet that the traditional "impact factor" hysteria will soon go away (to the benefit of all science).

Not surprisingly, and pleasingly to me, PLoS ONE does very well in these ranking lists: it is always in the top 15, and often among the top 5. Way to go, PLoS ONE!!! I am delighted. And the young scientists among you who read this should of course not be afraid of publishing in PLoS ONE in the future; it will benefit your careers.


  1. McDawg:

    Thanks! I am glad you found this blog post interesting, and that you have found our blog in the blogosphere. Welcome! I hope we'll see more of you here in the future.

  2. Good post Erik! The sooner people realize that *where* something is published doesn't matter, only *what* is published, the sooner what we now call "journals" will disappear.

  3. Good points! I have gotten similar responses after publishing in PLoS ONE:

    It is hard to try to beat ISI and Impact Factors, but as the Open Access movement has eloquently shown, it is possible to beat an existing system!

    Keep publishing in Open Access journals, including PLoS ONE, and things will change for the better!

  4. Nice blog, Erik!
    I agree with you: we have to support these kinds of journals and fight the conservative established publishing system. And of course the only way to do that is to keep publishing in such very good journals as PLoS ONE, something that, as a PhD student, I am not afraid of...
    Concerning the IF game, I think it is just a matter of time before we stop worrying about indices such as the IF, the H-index and other things... I have never reviewed a grant application, of course, but I would guess that more and more people are interested in evaluating the papers themselves, rather than the journals where they were published, when they look at candidates...

  5. Very nice piece, but I feel I have to play devil's advocate.
    The views expressed in the comments section, although I largely agree with the ideal, do not reflect the reality of the situation. To me, IF and citation indexes are a necessary evil. We must all acknowledge that science is highly competitive and that there are far more scientists than there are positions and funding to go around. Therefore we need ranking systems that allow us to judge the general quality of the work produced by an individual. IF and citation indexes DO play a role in science, purely because they allow us to evaluate, at a glance, the general quality of a journal and the work within it. Fabrice mentions reviewing grant applications, but how can you expect people not only to judge the application, but also to familiarise themselves with the applicant's catalogue of work? Even if you select only the 5 best papers from each of 5 applicants, that's 25 papers on top of the applications. I know the current system is flawed - many weak scientists triumph and many good ones fall by the wayside - but until a better system is introduced, we will all have to suffer under the current conditions. PLoS ONE should be applauded for the initiative it is showing in trying to change the system, but a better ranking system is still a ranking system, which will be used to judge us as scientists. As for the actual work within a paper, time will separate the wheat from the chaff.

  6. I agree that a ranking system is still a ranking system, but then, if we need a system to evaluate grants, we need a system that ranks your work or the scientist, not the journals... I have thought about this in the past, and one way, for example, would be to evaluate how the work has impacted the scientific community compared to the IF of the journal: for example, if you published in Am Nat 2 years ago and your paper has been cited 10 times, you have an Impact Ratio of 2, which would be good... Of course there would be caveats: if, for example, you publish all your work in very low-IF journals, you will get an average impact ratio which will be very high, so you could weight that by, for example, the average IF of your publications... All this might seem a bit nerdy, but my point is that we should stop the kind of reasoning that says that if you have published once in Science or another very high-IF journal, then your career is safe and grant reviewers won't even look at your other publications... Personally, I would rather have one very solid, highly cited and respected publication in Evolution or Am Nat than one very short and shiny but shallow publication in Science... (although of course I will not say no to both :))
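The "Impact Ratio" proposed in the comment above fits in a few lines of Python. The journal IF of 5.0 used for the Am Nat example is an assumed value chosen to reproduce the comment's arithmetic, not a real figure:

```python
# Sketch of the proposed "Impact Ratio": the number of citations a paper
# has earned, divided by the impact factor of the journal it appeared in.

def impact_ratio(citations, journal_if):
    """How well one paper did relative to its journal's impact factor."""
    return citations / journal_if

def average_impact_ratio(papers):
    """Mean impact ratio over (citations, journal_if) pairs - which could
    itself be weighted by the author's average journal IF, as the comment
    suggests, to discourage gaming via very low-IF venues."""
    return sum(impact_ratio(c, jif) for c, jif in papers) / len(papers)

# The comment's example: 10 citations, assumed journal IF of 5.0
print(impact_ratio(10, 5.0))  # 2.0
```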

  7. Tom is correct: ranking systems will always be needed, and that was never questioned in this blog post.

    If we agree that ranking systems are necessary, the question is of course WHICH ranking systems we should use. There are good and bad ranking systems. I think citation indices for AUTHORS are far better ranking systems than citation indices for JOURNALS. After all, the importance of an article should be judged by the number of citations it gets, not by where it happens to be published. Right? Although there is SOME correlation between journal IF and the citation rates of articles, that correlation is far below 1, which shows that journal IF is a rather weak predictor of an article's citation success.

    To give an example: is a paper published in (say) "Nature" that has got only 15 citations after 10 years a more "important" paper than a paper in (say) "Evolution" that has been cited 100 times? I would say NO, and I know several examples where this has actually happened in our field.
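The claim above - that journal IF correlates with article citations, but far from perfectly - can be checked on any sample of papers with a plain Pearson correlation. The numbers below are invented purely to show the calculation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented sample: the IF of the journal each paper appeared in,
# and the citations that paper actually received.
journal_if = [30.0, 30.0, 4.0, 4.0, 2.5, 2.5]
citations  = [15,   400,  100, 20,  60,  5]
r = pearson(journal_if, citations)
```

With the made-up numbers above, r lands somewhere between 0 and 1 - positive, but well below 1 - which is exactly the pattern the comment describes.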

  8. I fully agree that a system supplementary to ISI is needed. A system where "interest" is shown by readers would be another measure of the impact of the cited research. However, the concern highlighted by some colleagues in your article is worth noting. Scientific research, in my opinion, cannot be effectively screened by internet downloads and blogs. There is the problem of "trust" in any reported data. Despite the "political" bias that may influence decisions about an article, referees are the most effective direct tool for detecting scientific misconduct and judging the relevance of reported data to current or future research funding opportunities. In this way important scientific research keeps its pace forward. "Trust" is also attained through person-to-person contacts at conferences and through arranged visits to each other's institutions, so that research work is closely scrutinized. No wonder many do not feel comfortable "trusting" articles reported in PLoS ONE. To prove it: we recently submitted an article to PLoS ONE only because we could not get it accepted in forefront journals. Not because we do not believe that you are doing good work by bringing novel and challenging ideas to the current system, but because it is unfortunately a fact that one needs to publish or perish! I think an added value for the journal would be in ensuring that articles are scrutinized by scientists acting as referees, in addition to the system highlighted in this article. You may be popular for a period of time due to the difficulty many are experiencing in getting research funding, but eventually scientists will still prefer the older system in addition to your current one.