
Rashid: Research is more than the journal it’s published in

Maisha Rashid, IC Columnist

The value of a piece of scientific research or discovery lies in its potential for technological, medical and environmental advancement. The impact of research, then, should be judged by the extent of that progressive potential.

Unfortunately, the significance of research articles is assessed by the impact factor of the publishing journal rather than by the paper’s inherent potential. The impact factor of an academic journal is essentially a measure of how often its recently published articles are cited.

Publishing findings as journal articles also means holding onto inherently important pieces of experimental data until they can be shaped into a complete story. This impedes the pace at which the science reaches the community.

Instead of being motivated by the importance of the discovery itself, scientists are desperate to publish in the highest-ranking journals. Their passion and motivation are misguided, and I can personally attest to that. The situation, in my mind, is best described by a Hollywood analogy: instead of playing a meaningful, complex and enriching character, an actor chooses the role that’s sure to garner the highest ticket sales.

In response to such misguidance, scientists and scientific journals across the country have signed DORA, the San Francisco Declaration on Research Assessment. The declaration is a recognition by the scientific community that the basis on which research, and essentially a scientist, is evaluated needs to improve, and its signatories argue strongly against the weight put on impact factors as a means of evaluation.

As a scientist in the making, I refuse to let my research be deemed less worthy because I didn’t publish in a journal with a higher impact factor.

The impact factor is a number assigned to a journal, calculated by dividing the number of citations its articles receive in a given year by the total number of articles the journal published in the previous two years. Following a Ph.D., as we set out looking for post-doctoral positions, our intellect and capabilities are assessed by the impact factor of the journals we have published in. Thus, the journal impact factor (JIF) determines how competitive our resumes are, and whether or not we land our dream jobs.
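
To see how that arithmetic plays out, consider a hypothetical journal; the numbers here are invented purely for illustration:

2014 JIF = (citations received in 2014 by articles published in 2012 and 2013) ÷ (number of articles published in 2012 and 2013)

If the journal published 200 articles across 2012 and 2013, and those articles were cited 800 times during 2014, its 2014 impact factor would be 800 ÷ 200 = 4.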

Hashem Dbouk, a post-doctoral fellow in the Department of Pharmacology at the University of Texas Southwestern, expressed similar feelings in an e-mail when asked about this problem: “We all come into science wanting to follow our passion, but the majority are pushed away due to the unrealistic requirements and expectations to get grants and land an academic job. This breeds the increased, and highly detrimental, competitiveness among scientists as well as drives the lopsided publication model based on impact factor.”

My contention rests here: this preoccupation with grant funding and gaining a competitive edge is destroying our passion for science and research while encouraging a fixation on the JIF, a number that misrepresents the potential of scientific articles and individual researchers.

Personally, I think making a discovery is similar to creating a new piece of music or art. All are driven by the willingness of one person to give up personal time, invest hours of sweat and work extremely hard toward revealing an unknown truth about our world, in turn creating a “masterpiece.” Seen that way, it seems futile to spend that time constructing and amalgamating pieces of experimental results into a story for a paper simply to get into a particular journal.

Any new finding serves as a base for the next scientific breakthrough, drawing us closer to solving terminal diseases such as cancer. So why waste time and delay scientific progress when, as scientists, we should care only about bringing the new finding to the world?

Maybe we should just have smaller poster sessions and presentations where we provide regular updates on our research. This would allow more extensive critical assessment of our work and greater transparency. Instead of puzzling out problems and generating new hypotheses with only a small lab team, we would obtain valuable advice from the whole community of scientists who see our posters and presentations.

Through this system, we avoid losing our focus as “creative people,” a somewhat unorthodox, personal description I use for us scientists. We come that much closer to completing our “masterpiece,” and other scientists can start working on the topic immediately instead of having to wait for our paper to come out.

Research is a field that needs critical thought and perseverance to sustain itself. For that to happen, competition and the obsession with grant funding need to stop being distractions. And the JIF can’t be used as an “article-level” metric, that is, as a way to assess an individual article within a journal.

It’s time for a cultural change in this misguided and disoriented system.

Maisha Rashid is a second-year graduate student studying cancer biology.


Comments

  • Srimathi Kasturirangan

    Hey, a very interesting read! I almost entirely agree with your views. The phrase “publish or perish” remains ingrained in our minds and tends to be the driving factor for most, if not all, research these days. Unfortunately, the spark to learn just for the sake of gaining knowledge has been effectively stamped out. I see what you mean when you say we worry about the impact factor of the journal we publish in, rather than just publishing our “masterpiece” (as you rightly call it) to the world. In my opinion, man is a social animal, and appreciation from peers is something he yearns for. So everyone wants to publish in the best possible journal they can; this would validate their work! I guess it’s important to let the passion for research drive you, but it’s also important to strike a balance between the rate of publishing and the JIF so you can put out the best “masterpiece” possible. If we don’t look at JIFs, then how would we distinguish the most accurate papers from all the rest?


  • Maisha Shababa Rashid

    Well, the JIF, as I mentioned, isn’t an article-level metric, so it does not determine the legitimacy of an individual article. An individual article is legitimized by its actual results; you have to critically assess those results to decide whether you believe them. And this is exactly what I’m saying: a JIF is not meant for evaluating your research paper (look at how I said it’s measured). And the “best” journal isn’t simply the one with the highest JIF, because a more prevalent disease model or signaling pathway can easily get the most citations and raise a journal’s JIF. In that case, are you saying rare diseases and new signaling pathways aren’t good enough? And once again, while as human beings we inherently want to prove ourselves, would you go into the lab ready to spend 13 hours a day because you want to outcompete your peers or colleagues, or because you’re excited to know the “truth”?


    Srimathi Kasturirangan Reply:

    I would definitely work towards finding the “truth.” But though the JIF of a journal might not be an accurate way to validate an article, it still holds that the supposedly best journals have extremely high JIF values! Are you saying these journals don’t publish articles about rare diseases and new signaling pathways? All I am saying is that though we get distracted by where we publish and by grant funding, it can’t be eliminated, because we would want to publish in the best journal we can! And we obviously need to look at each result critically, but there are millions of papers; are you going to read each one and analyze each figure? Though that’s how it should be done ideally, we don’t live in an ideal world and don’t have time to read each one! But we should definitely focus more on the quality of the research we put out, rather than where we publish!


  • Wade Lee

    Thank you, Maisha, for this interesting article… I’m glad to see the IC publishing on the topic of scholarly communication and how it is measured and accessed. You make an excellent point that Impact Factors are a journal-level measurement and don’t say as much about the individual articles published in the journals. (As I always tell researchers: your article may be the one that brought down the average if it wasn’t cited much.) Alternative measures of an individual’s scholarly impact, such as the h-index (basically, the largest number “h” such that you have “h” articles each cited at least “h” times), can be problematic for early-career researchers, since your h-index is by definition less than or equal to the number of papers you have published. And those measures are still calculated largely by counting the number of publications that cite your work, not by other metrics of impact on the scholarly conversation (retweets? downloads? etc.) that fall under the altmetrics label.
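
    A quick illustration of that h-index definition, with made-up numbers: a researcher whose five papers have been cited 10, 8, 5, 4 and 2 times has an h-index of 4, since four of the papers have at least four citations each, but there are not five papers with at least five citations each.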

    In regard to Srimathi’s question of what measures of (relative) quality we can use to find the best papers, or how our best research is going to be discovered… I imagine that in some fields, being in the ‘right’ journal, with the ‘right’ readership, could be more important than being in the ‘high impact’ journals that may be read outside your field. Also, as more and more papers are discovered ‘on their own’, independent of the journal they are published in, whether through disciplinary research databases or Google Scholar, the best research may stand independent of its journal ‘home’. Google Scholar, for example, factors the *paper’s* citation count into its relevancy rank and position in the search results, rather than the journal’s. And hopefully more open-access publishing will get research to a larger audience… and not just the audience that can afford access to the highest-impact journals (Science, Nature, PNAS, etc.).

    Wade Lee
    Science Research Librarian
    University of Toledo Libraries
