Today, I’m writing to tell you why I’m not a fan of the Journal Impact Factor (JIF). So first let me tell you what the JIF is and where it comes from. The Journal Impact Factor is the average number of citations generated by each citable item (such as an article) in a journal, as calculated over a set window of time. The standard window spans three years: the number of citations received in the current year is divided by the number of citable items published over the two preceding years. Because the citation count is divided by the number of articles, the JIF also takes the size of a journal into consideration. So, the 2021 impact factor looks at the 2021 citations of articles published in 2019 and 2020: A (the number of times those articles were cited in 2021) divided by B (the total number of citable items the journal published in 2019 and 2020) gives us the 2021 impact factor. For example, 640 citations (A) generated by 240 articles (B) (10 articles per issue for a journal published monthly: 10 × 12 × 2 = 240) gives 640 / 240 ≈ 2.67. Does the number 2.67 really tell us what groundbreaking research was published in the journal in 2019 and 2020? This might be a simplistic take on the JIF, so let’s back up a step to talk about why it was created in the first place.
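If it helps to see that arithmetic written out, here is a minimal sketch of the calculation in Python. The function name and all of the numbers are just the hypothetical example from above, not real journal data:

```python
# A minimal sketch of the JIF calculation described above.
# The figures are the hypothetical example from the text, not real journal data.

def journal_impact_factor(citations_in_year: int, citable_items_prior_two_years: int) -> float:
    """E.g., 2021 JIF = citations received in 2021 to items published in
    2019-2020, divided by the number of citable items published in 2019-2020."""
    return citations_in_year / citable_items_prior_two_years

# Hypothetical monthly journal: 10 articles per issue, 12 issues, 2 years.
citable_items = 10 * 12 * 2   # 240 citable items published in 2019 and 2020
citations = 640               # citations those items received in 2021

print(round(journal_impact_factor(citations, citable_items), 2))  # 2.67
```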
Where did the Journal Impact Factor (JIF) come from and why?
Libraries have never been able to afford to buy, house, and provide access to all of the published literature (books and journals) that their readers (staff, faculty, students, the general public) might want to use. Because of this, we have spent years trying to determine the best way to decide what to purchase. Thus, the field of bibliometrics developed: beginning in the late 19th century, when statistics were introduced in libraries, specifically statistical bibliography used to assess the outputs of collections of published books and journals. In the 1920s and 30s, bibliometric laws were introduced to the field, including Lotka’s law of scientific productivity and Bradford’s law. The post-World War II period brought quantitative approaches to measuring scientific output, along with a general belief in the progress of science and in the ability to measure its impact. In the 1960s, Eugene Garfield and the Institute for Scientific Information developed the Journal Impact Factor. The JIF was developed not necessarily to rank the journals authors should try to publish in, but rather to help libraries determine which journals they should subscribe to, on the logic that their readers are more likely to want to read journals that are highly cited. It’s not that the JIF was a bad tool, and it makes sense that authors would look at a JIF and draw the same conclusions librarians did, but it is too simplistic a tool (in my opinion) for the various ways it is used today. Garfield (1999) himself wrote, “Like nuclear energy, the impact factor is a mixed blessing. I expected it to be used constructively while recognizing that in the wrong hands it might be abused” (Journal impact factor: A brief review. Canadian Medical Association Journal, 161(8), 979–980).
So, how is the Journal Impact Factor (JIF) used, and why don’t I like it?
Well, there are multiple uses, and various reasons I’m not a fan. The JIF is used to evaluate the impact of journals and to evaluate the impact of scholars (which is then used as an input to promotion and tenure decisions); in other words, it is used as a surrogate evaluation tool. For example, if an author’s work is accepted by a journal with a high JIF, does that really indicate the quality of the author’s work? They might publish a certain number of articles in a prestigious journal, but that doesn’t necessarily mean the author is highly cited, or that their work is impactful in their field. There are, of course, other journal-level metrics, such as a journal’s cited half-life and the Eigenfactor Score, as well as author-level metrics, such as the h-index and the i10-index (Google Scholar). There are also alternative measures, or altmetrics, which have become increasingly popular over the years. I do not think any journal or author should be evaluated on times cited alone; I believe it’s too simplistic. On the other hand, using a number to indicate the prestige of a journal or the number of times an author has been cited is a simple way to compare journals and authors who vastly differ from one another. I think, as Garfield suggested, that the Journal Impact Factor is a tool that was meant to be used for one purpose and has been shoehorned to fit another purpose altogether. I will leave y’all with a link to a tweet that discusses, among other issues in publishing, the issue with journal impact measures.
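To make the contrast with the JIF concrete, here is a minimal sketch of one of those author-level metrics, the h-index: an author has index h if h of their papers have each been cited at least h times. The citation counts below are invented for illustration:

```python
# A sketch of the h-index: the largest h such that the author has
# h papers with at least h citations each. Citation counts are invented.

def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:  # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers with >= 4 citations each
```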
If you have questions, or would like to learn more, please contact me: alexa.hight@tamucc.edu.
Alexa Hight
Scholarly Communication and Copyright Librarian