Thursday, June 4, 2009

PLoS launches new campaign on Article-Level Metrics (ALMs)

As many of you are aware, I am quite critical of the "Impact Factor" hysteria surrounding journals. It is very weird, I think, that an article's importance should be measured by where it was published. To be sure, there is a correlation between the Impact Factor (IF) of a journal and the number of citations of its average article (by definition), but the correlation across individual articles is at most moderate (around 0.5, if I remember correctly).

This high variance means that many individual "low-impact" papers are regularly published in "high-impact" journals like Nature and Science. Such papers receive few citations, often fewer than papers published in more specialist journals in our field, such as Evolution or American Naturalist.

Clearly, something is wrong here. Should a person who was lucky enough to get a paper into Nature, but whose paper is rarely cited, be offered a job or a research grant, while a competitor who has published a much more highly cited article in a "normal" journal does not get the job? Clearly not, if you ask me. What should matter, ultimately, is the citation rate and importance of individual articles and authors, not journals.

This is where I think scientific assessment will move in the future. Consider, for instance, the increasing use of the "h-index" to evaluate individual scientists, which seems to be replacing the length of publication lists, or the journals where people have published, as the criterion for deciding who the "best" scientists are. The general message should be: try to publish fewer but better papers, because you will be evaluated as an individual, and there are no shortcuts or easy ways to cheat these new measures.
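
For readers who have not met it, the h-index is easy to state: a scientist has index h if h of his or her papers have been cited at least h times each. A minimal sketch in Python (the citation counts are invented purely for illustration):

def h_index(citations):
    # Sort citation counts from highest to lowest, then find the
    # largest rank h at which the h-th paper still has >= h citations.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers with invented citation counts: the top three each
# have at least 3 citations, so the h-index is 3.
print(h_index([10, 8, 5, 2, 1]))

Note how the measure rewards a body of consistently cited work rather than a single lucky hit, which is exactly the point.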

Along these lines, the new and rapidly growing OA journal PLoS ONE has just launched a campaign for Article-Level Metrics (ALMs). The staff at PLoS ONE hope that these new measures, which include per-article performance indicators such as citation rates, numbers of downloads, coverage in the media and the blogosphere, inbound links and other measures of "importance", will outcompete old-fashioned journal Impact Factors (IFs). IFs are increasingly subject to criticism, and I would not hesitate to call them old-fashioned and yesterday's news.

Just as the "h-index" quickly became popular and established itself as a new tool for evaluating individual scientists and their performance, ALMs will hopefully also help to break up the "monopoly" of IFs and their naive use by hiring committees. The journals that should worry most about this are the traditional high-IF journals like Nature and Science. Their whole existence and high prestige have been built more or less solely on the absurd IF system, and if it starts to crack, their future might be in jeopardy; they can certainly not take anything for granted. Here you can follow a "webinar" in which the Managing Editor of PLoS ONE (Peter Binfield) explains more about ALMs.

15 comments:

  1. In my opinion, this is a great initiative, and I hope it will be followed by other journals... I just have one minor thing to criticize: the number of downloads does not really say whether a paper is of good quality, since a flashy title promising something that is not really supported by the data would get a lot of downloads but few citations... However, it is probably still a good proxy for quality early in a paper's life, since you usually have to wait a year before a paper gets cited; the number of citations only becomes really useful after a couple of years, as far as I am concerned...
    But still, very exciting to see PLoS ONE as one of the least conservative journals on the market, and very dynamic on the web, which is likely to be a fruitful approach...

    ReplyDelete
  2. I still think the number of downloads might reveal more than just a "sexy" title, as people would not necessarily download something if, after reading the abstract, it seems boring. Anyway, a minor point; we probably agree on most things. It would of course be interesting, when the data become available, to look at the correlation between citations and downloads.

    ReplyDelete
  3. I have disliked impact factors from day 1. I may not have a completely representative background, but the concern over IF seems to me stronger in Europe. I rarely heard much about it before moving to Sweden, and I had never considered my papers in herpetology journals to be of lesser quality (to the contrary, I am quite proud of them).

    In my view, the IF and other rating systems are not even necessary. New metrics will always be distortions of reality in different ways. Perhaps 10 or 15 different metrics, considered simultaneously, are of some relevance. But why reduce a journal, or a scientist's impact, to a single stupid number? Has that strategy ever worked? Does IQ predict success? Sticking to scientific publications: are the medical journals better because they are cited more? Are people in fields in which papers are readily produced (read: not ecology) better scientists? Isn't the h-index a biased measure of a scientist, because it is so correlated with age (maybe we should use it starting 10 years after a scientist has retired)? Scientists from countries that have better funding for graduate students will probably have higher h-indices; are they "better"? Is Wynne-Edwards great because his book is a classic citation for group selection run amok? (In his defense here, I've heard the book is pretty interesting if you actually read it.)

    I'd venture that, without impact factors at all, we all know what the good journals are, we all know what journals our colleagues read, we know good science when we see it, and we all have a feeling for our own level of accomplishment. I don't think we need any of these numbers. Let the publishers worry about them if they must, but I won't.

    ReplyDelete
  4. OK, quick extra note: aside from what I wrote (preached?) above, I do applaud PLoS ONE for thinking outside the box and developing an alternative set of metrics. It seems to me that this is a positive development, and I agree with Erik that this is good news. Long Live PLoS ONE!!! (Next time I spray-paint graffiti on the side of a 1000-year-old church, that's what I'm writing.)

    ReplyDelete
  5. Thanks Shawn - although we certainly don’t condone graffiti, we do appreciate the sentiments :) Just a couple of notes on the discussion above – we also agree that the preferred solution is for people to read articles and form their own opinion, but I think that the widespread adoption of the Impact Factor proves that some people would rather use a metric instead. Our hope is to give those people something more meaningful to use when considering a specific article or author (as opposed to an entire journal). In addition, the existence of additional metrics at the article level will allow people to sort, filter and cross-compare when performing searches - which should allow them to get to appropriate content more rapidly and effectively.

    Fab, as to your point about whether or not usage correlates with citations (or even with actually reading the paper!): usage will be just one of the things that we provide; however, it is worth mentioning that other investigators have found some degree of correlation between citations and online usage. For example, see: http://www.journalofvision.org/9/4/i/

    We expect that once data like this is actually made available to the community, new ways to consider and evaluate the data will be developed.

    Pete Binfield
    (Managing Editor of PLoS ONE)

    ReplyDelete
  6. Pete:

    Great to see that the "big" names from PLoS ONE have now discovered this blog! I am quite flattered...

    ReplyDelete
  7. A direct quote from the article that Pete referred to, about the relationship between the number of citations and the number of downloads:

    "The correlation between these two quantities is 0.74, indicating a strong positive relationship."

    Indeed!!!
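
    For anyone who wants to check such a figure against their own data, the number reported there is a Pearson correlation coefficient: the covariance of the two quantities divided by the product of their standard deviations. A minimal Python sketch, with download and citation counts invented purely for illustration:

    from math import sqrt

    def pearson(xs, ys):
        # Pearson correlation: covariance divided by the product
        # of the standard deviations (no external libraries needed).
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    # Invented per-article figures: downloads and later citations
    downloads = [120, 340, 90, 560, 210]
    citations = [3, 12, 2, 20, 7]
    print(round(pearson(downloads, citations), 2))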

    ReplyDelete
  8. Erik, every word in this article is correct! I make a big effort to push back against the IF as a measure of scientific value. There must be some better kind of metric than this uncontrolled chasing of points to build a career by collecting articles published in high-ranked journals with high IFs.
    What is the message for the young generation?

    ReplyDelete
  9. As one of Erik's protégés, I have a big place in my heart for PLoS ONE and all it stands for. I am very pleased with the paper I published there, and this will always be so. My comment offers a slightly different perspective.

    Although I agree with most of the points made here, one thing that is missing is: why do we need and use such metrics at all? In my mind the answer changes depending on where you are in your career.
    As an early-stage scientist, just starting a postdoc, they are a necessary evil, as they add weight to my CV, which in turn will (hopefully) help with funding, etc.
    If money were no object we could all agree with Shawn: nothing would change, we would read the same articles and journals, and science would move on very nicely, thank you very much. But it isn't, and we are all fighting for our place at the feeding trough. This is even harder when you consider that the early reception of an article doesn't tell you the true value of the work within; it is more likely a reflection of the length of the paper (it is easier to read 5 pages of guff than 30 thoughtful, conceptually strong pages).
    IF, as it stands, doesn't influence what I read or find interesting, nor does it determine the standing I give certain journals over others when choosing which e-mail alerts to sign up for. BUT when my application for money is sent out to those outside the field, how do they judge my work? Even within the field, I am unknown (although I am sure Erik's name on the papers helps); I do not have a huge catalogue of work, built up over many years, which can be used to judge my standing as a scientist. So journal-level metrics, for me, allow my work to be judged by the quality of the publication in which it was accepted. For many of my peers, seeing that I have published in Am Nat and Evolution (although none of these papers have been cited by anyone but me; I'm not even sure they have been read by anyone but me) will give me a boost, as they understand the level of work needed to get into these journals. For those outside, the standing of these journals will also assist me in my quest for my own lab.

    I do applaud PLoS ONE for the revolution they have started. I agree that many of the ideas will work extremely well in the future, but remember there is a baby somewhere in that dirty bathwater.

    ReplyDelete
  10. To quote Tom:

    "So journal level metrics, for me, allow my work to be judged by the quality of the publication in which it was accepted. For many of my peers, seeing I have published in Am Nat and Evolution ... will give me a boost, as they understand the level of work needed to get into these journals."

    I agree with this statement, in that I believe many people do think that papers published in Evolution and Am Nat (or Science or Nature) are better than papers published in other journals. But I don't agree with this line of thinking. Certain kinds of science are needed for these journals. My best paper (in my opinion) was published in the Biological Journal of the Linnean Society (not a very good IF). Papers in the Journal of Morphology may not be much cited in the first two years, but have lasting value (e.g., the citation "half-life" of Molecular Ecology is lower). A very well done systematic revision can't go into any of these journals. John Wiens (a science hero of mine with a systematic inclination) hasn't published in Science or Nature... is his work of lesser quality? No, he's a stud (OK, now I hope John isn't reading this blog!).

    The point is that I do believe people use IF to judge the quality of a scientist's work, but I view this approach as too flawed to be of reliable worth. Before the IF revolution, people generally focused more on publishing in the journal that seemed to have the appropriate readership. Is it so idealistic to think this was a sounder approach?

    ReplyDelete
  11. Shawn:
    Talk about quote mining! The comment was made in the context of applications for money at an early stage of a career, and had nothing to do with the quality of the science or the work itself.

    The fact that John Wiens (both of them) have had long, successful careers rather undercuts your point. You made a point earlier that we all respect different scientists regardless of their publication record, and I agree. But we have to acknowledge that many people don't make it (I could easily fall into this category in a couple of years); there are far more PhD students and postdocs than there are funding and positions to fill. How then do you judge a pile of applications? Thoughtfully read each paper produced by every applicant, or quickly glance at their publication lists?

    I do not think the IF is a good measure of science, and the chances are that your applications will be read by some of your peers, who have hopefully read some of your work. But considering the amount of work produced, and the new journals appearing all the time, it is difficult to keep up. This has to be accepted; your sentiments are great, but I don't think they represent the field as it stands today.

    ReplyDelete
  12. Peter, thanks for your input; the correlation is indeed very high, so yes, this is probably a good metric... As to whether or not we should use metrics at all (since we seem to more or less agree that, in this case, article metrics and not journal metrics are best), I will add one minor point to the discussion started by Tom and Shawn... The thing is that some people will judge parts of your publication list based on IFs and similar metrics, while others will take the time to read some of your papers; but more generally, there is a need for us scientists to always measure, quantify and compare things, including when reviewing grant applications... and any kind of metric will at least provide a way to compare people with numbers... I personally think this is sad, but it might be the most efficient way to evaluate an application, instead of saying "I will fund this person because I like the general impression of her application". We need to rank applications, and for this we need good metrics... Of course, this does not preclude other essential aspects of grant reviewing, and we should not base our evaluation solely on any kind of metric; we still need to take the time to learn more about the person applying, take the project into account, etc., and of course read some publications, instead of reading only the "metrics"...

    ReplyDelete
  13. To sum up this interesting discussion:

    I would be very hesitant to rely on any single metric, be it impact factors, the number of downloads, blog entries or whatever! I also agree with Tom that measures such as the h-index are not very useful for early-career scientists, and neither is the number of publications, etc. Perhaps young scientists are therefore evaluated better than more senior ones, as there will be room for more subjective evaluation? Who knows.

    In any case, my own position is that these kinds of metrics should not be used by themselves, in isolation from subjective evaluations, but rather as a complement (and not necessarily always).

    I do see another use for these metrics, though, which has less to do with the "quality" of articles (whatever that is). In the information age, with A LOT of publications, the greatest difficulty is of course to "navigate" through the literature. One way might be to do a citation analysis to find the most cited papers (e.g., reviews) and go from there to look for other papers (a minimal sketch of this appears at the end of this comment). These highly cited papers might have SOMETHING to say, although that might not necessarily reflect quality, of course. But at least one can use citation analysis to find them in the first place, then critically evaluate them and look for other papers connected to them.

    The number of downloads has the advantage that it can be used earlier than citations (which take many years to accumulate), and tools like "Google Analytics" can be used to find "hot" papers at an early stage.

    Again, this is more a pragmatic view of citation metrics, not necessarily dealing with the philosophical issues of "quality". However, if we want to keep up with the literature, we might want to know which papers are highly cited, simply because we might want to know WHY, and we might also want to disagree with these papers.

    The fact that a paper gets many downloads or is cited a lot surely says SOMETHING, and we will never be able to read all papers anyway. So my advice would be to be pragmatic about these measures and not take them too bloody seriously. They are tools for us, not absolute truths or quality indicators.
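
    To make the "navigation" idea above concrete, here is a minimal sketch in the same spirit as the earlier ones in this thread; the paper records and citation counts are invented purely for illustration:

    def entry_points(papers, n=3):
        # Rank papers by citation count and return the top n as
        # starting points for exploring a literature; a pragmatic
        # filter, not a judgement of quality.
        return sorted(papers, key=lambda p: p["citations"], reverse=True)[:n]

    # Invented records, e.g. pulled from a citation database
    papers = [
        {"title": "Big review", "citations": 412},
        {"title": "Case study A", "citations": 35},
        {"title": "Case study B", "citations": 97},
    ]
    for p in entry_points(papers, n=2):
        print(p["title"], p["citations"])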

    ReplyDelete
  14. This comment has been removed by the author.

    ReplyDelete
  15. ALMs would imply that all journals, I say ALL, should be open access!

    ReplyDelete