Can we ignore the journal impact factor?


Journal metrics have been in the news again recently, with the Nobel Prize Facebook and Twitter accounts sharing a video of four Nobel Laureates speaking out against the use of journal impact factors to judge research quality. Do you know what the issues are and how these measures can be used? Karen Clews and guest blogger Vicky Wallace look at how the measures have developed and how you can use them responsibly.

The arguments made in the video are not new; we have heard the same concerns for a few years now. The San Francisco Declaration on Research Assessment (DORA) has been building momentum on this side of the Atlantic recently, with several institutions publicly signing up in the past 12 months.

There are dissenters though: those who recognise the reasons the impact factor has stuck around even in the face of these challenges. Merlin Crossley’s response to the video argues that, while he agrees with the principles in theory, he “cannot advise the junior researchers I mentor to ignore Impact Factors”.

Yes, Merlin is right: statements like these don’t remove the use of journal level metrics overnight, and young researchers have to be able to play by the same rules as everyone else. The Nobel Laureates are saying that the JIF should not matter, but I’m sure they would also recognise that there are still large parts of the research environment that focus on this measure, so we can’t dismiss it completely.


As research organisations we need to recognise the challenge in moving away from the impact factor. Birmingham is one of the institutions that have signed up to DORA, confirming that the institution will work to ensure journal level metrics are not used in internal processes as a “surrogate measure of the quality of individual research articles”. Signing up to things like DORA is the easy part; making sure that we uphold the values it encourages is harder. We have to work together to make sure that journal level metrics are not used to replace expert opinion and peer review, and to promote the “basket of metrics” approach when using indicators and metrics to support these processes.

As researchers we must also take responsibility for pushing this change forward. Tweets like this one from high profile accounts can help to raise the issues, but we need to continue to challenge bad practice on the ground when we see it. If funders or publishers are judging you solely on the impact factor of the journals you’ve published in, encourage them to look at other qualitative and quantitative indicators that show the real value of your work.

What you need to know about journal level metrics

Journal metrics measure how often a journal’s articles have been cited by other articles, averaged over the journal’s output. They can be used to compare journals in a given academic discipline, and were developed to show which journals were publishing the best research. As suggested above, there are many who challenge the use of journal level metrics, but they can be useful in some situations if used responsibly.
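To make the averaging concrete, here is a minimal sketch in Python of the calculation behind the best known example, the two-year Journal Impact Factor; the journal and its citation counts here are invented purely for illustration:

```python
# The two-year Journal Impact Factor is the number of citations received
# in one year to items a journal published in the previous two years,
# divided by the number of citable items from those two years.

def two_year_impact_factor(citations_to_recent_items: int,
                           citable_items: int) -> float:
    """Average citations per citable item over the two-year window."""
    return citations_to_recent_items / citable_items

# Hypothetical journal: 480 citations in 2023 to its 2021-2022 output,
# which comprised 200 citable items (articles and reviews).
print(two_year_impact_factor(480, 200))  # 2.4
```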

When finding a journal metric, you need to use a database that indexes your journal, along with the publications that cite its articles. There are three large databases you can use to calculate journal metrics:

  • Web of Science provides the Journal Impact Factor (JIF) – probably the best known journal metric – and the Eigenfactor score.
  • Scopus provides a range of journal metrics: CiteScore, which measures the average citations received per document published in the serial; SNIP (Source Normalised Impact per Paper), which is similar to CiteScore but allows cross-disciplinary comparison because the score is normalised for the expected citation rate in each discipline; and SJR (SCImago Journal Rank), which measures weighted citations received by the serial, with citation weighting depending on the subject field and prestige of the citing serial.
  • Google Scholar and Publish or Perish provide figures summarising citations, citations per year, citations per paper, citations per author, papers per author and authors per paper, as well as a journal h-index and some h-index variants.

All of the above give rise to a ranking of journals, with the most highly cited journals at the top of the list. Rankings vary depending on the exact formula used and on the underlying source data, so different measures may place different journals at the top.

As discussed above, many issues have been raised about the use of journal level metrics:

  • They are based on a mean score, so can be skewed by a single highly cited paper; checking the impact of a journal over several years will help you to be sure that a high ranking is the norm rather than an anomaly (a short numerical sketch after this list illustrates the point).
  • Most journal metrics cannot be used to compare journals across different disciplines as different citation patterns exist between disciplines e.g. Economics papers tend to use fewer citations than Medicine papers.
  • The timespan used to calculate the average number of citations is arbitrary, with 2, 3 and 5 years used; and different disciplines may need different timescales.
  • Review journals (that is, journals consisting of review articles) attract high numbers of citations, which inflates their scores.
  • There may be abuses of the system e.g. self-citations or citation stacking.
  • Not relevant to disciplines where outputs are not journal articles.
  • Not relevant to disciplines where it is not usual practice to cite extensively.
  • Negative citations are counted in support (that is, if an author criticises an article, this reference is counted in the same way a supportive reference is counted).
  • There are various opinions about whether a journal metric can be used to judge the quality of an article: papers in a journal with a consistently high impact factor may be high quality thanks to a rigorous editorial policy, but it does not follow that papers in a journal with a lower impact factor are of lower quality.
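The first point above is easy to see with a toy example. Here is a short Python sketch, using invented citation counts, showing how one runaway paper drags the mean far above what a typical paper in the journal achieves:

```python
from statistics import mean, median

# Invented citation counts for ten papers in a hypothetical journal;
# a single runaway paper dominates any mean-based score.
citations = [2, 1, 0, 3, 1, 2, 0, 1, 2, 500]

print(mean(citations))    # 51.2 -- the mean suggests a highly cited journal
print(median(citations))  # 1.5  -- yet a typical paper is cited once or twice
```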

Quick tips for using journal metrics as part of a publication strategy

  • Look at other scores e.g. % of papers cited – this will give you a better idea of how likely your paper is to be cited.
  • If you are going to use the Journal Impact Factor or CiteScore to inform your publication strategy, be sure to check that the score has been consistent over recent years, and is not an anomaly.
  • Better still, use a measure that has been field weighted for discipline, e.g. the SNIP, available through SciVal or Scopus.
  • Be aware of the flaws of using journal metrics, particularly in relation to your own discipline.
  • ALWAYS make sure that you use a journal metric with caution and in conjunction with other metrics and expert opinion.

Find out more about using research metrics and the support available at Birmingham on the Library intranet pages.
