Science

Suspicious phrases in peer reviews point to referees gaming the system


When University of Seville researcher Maria Ángeles Oviedo-García began to look at the peer reviews some journals publish alongside their papers, she was surprised to see that the same vague, generic phrases kept turning up.

“In abstract, the author should add more scientific findings.” “Discuss the novelty and clear application of the work in the abstract as well as in introduction section.”

She ultimately identified 263 suspicious reviews prepared for 37 journals in multiple disciplines between 2021 and this year. One reviewer used duplicated phrases in 56 reviews, she reported last month in Scientometrics.

It’s an unusually detailed analysis of a little-noticed scheme that may be allowing some researchers to reap undeserved benefits for boilerplate or downright manipulative reviews. The practice may also be compromising the integrity of the scientific literature. “Some other researchers will probably base their future research on those fake-reviewed papers, and it’s scary,” especially for ones about health and medicine, says Oviedo-García, who primarily studies marketing and tourism.

Oviedo-García and other research-integrity experts suspect the reviewers worked off a template to crank out reports quickly. They could then take credit for the work on their CVs to gain a boost in professional evaluations. Some may have had an additional self-interest: Several reviewers asked authors to include citations to the reviewers’ own papers, and some authors complied.

The reviews Oviedo-García analyzed appeared almost exclusively in journals from MDPI, which publishes reviews alongside many of its papers if the author agrees. (Reviewers are named if they consent.) The privately held, Switzerland-based company publishes all its articles open access, charging authors a fee and promising prompt publication. Like other open-access publishers, MDPI offers reviewers a credit toward future publishing fees, which Oviedo-García says creates an additional incentive to write numerous peer reviews as quickly as possible.

MDPI has begun to investigate, it said in a statement, starting with 84 papers Oviedo-García first noted in a blog post in January. So far, it has found that 32 papers require postpublication re-review and that 37 meet quality standards. MDPI also said it contacted the 10 reviewers who had evaluated the papers, and their institutions, to “communicate our concerns directly. … Any future collaboration with these reviewers will be closely monitored.”

The reviewer who authored the 56 reviews disagrees that they lack substance. “Any suggestions I make are done with the belief that they are relevant and valuable to the authors’ research,” mechanical engineer Moustafa Mahmoud Yousry Zaghloul of the University of Queensland told Science in an email. He says MDPI asked him to follow a structured format to make the reviews more consistent and clear; as a result, “It is possible that certain standard phrases or comments may appear in some reviews,” he says. “However, this is not an indication of a lack of effort or care in the reviews.” MDPI has not retracted the papers he reviewed. (Oviedo-García says she is not an expert in the subject matter covered by his reviews, or most of the others.)

Other scientific-integrity sleuths have previously spotted possibly compromised reviews. About 1% of 67,000 papers from Sage Publishing had reviews containing duplicated passages, according to a 2022 study by data scientist Adam Day. (He conducted the research as a Sage employee but now runs Clear Skies, which develops software to detect fraud in scientific manuscripts.) Day also used duplicated text as a signal of possible fake peer reviews but says many more likely go undetected.
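Neither study publishes its detection code, but the underlying signal is simple to illustrate. Below is a minimal, hypothetical sketch (not Day’s or Oviedo-García’s actual method) that flags pairs of reviews sharing long runs of identical wording via word n-gram overlap; the review texts, function names, and threshold are all invented for illustration.

```python
from itertools import combinations

def ngrams(text, n=5):
    """Set of word n-grams in a text (crude whitespace tokenization)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def flag_duplicates(reviews, threshold=0.2):
    """Yield (id1, id2, score) for review pairs with heavy n-gram overlap."""
    grams = {rid: ngrams(text) for rid, text in reviews.items()}
    for id1, id2 in combinations(grams, 2):
        score = jaccard(grams[id1], grams[id2])
        if score >= threshold:
            yield id1, id2, score

# Invented examples: two reviews share a boilerplate sentence, one does not.
reviews = {
    "r1": "In abstract, the author should add more scientific findings. "
          "Discuss the novelty and clear application of the work.",
    "r2": "In abstract, the author should add more scientific findings. "
          "Improve the resolution of the figures in section three.",
    "r3": "The derivation in equation four omits the boundary term "
          "and should be corrected before publication.",
}
for id1, id2, score in flag_duplicates(reviews):
    print(f"{id1} vs {id2}: n-gram overlap {score:.2f}")  # flags r1 vs r2
```

Real screening would need more than this: normalization of punctuation, tolerance for paraphrase, and a baseline for phrases that legitimately recur in reviews, which is partly why Day suspects many duplicated reviews go undetected.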

A separate study of MDPI papers, posted this year on the HAL archive, found that suspect reviews affected 0.1% of them. That is not reassuring, says co-author Guillaume Cabanac, an information scientist at Paul Sabatier University. “In science, we expect 0%, like for the transportation system. If one plane a day crashes, is it OK?”

The template-based reviews come on top of other abuses, including cases in which authors of submitted manuscripts suggest fictitious reviewers with email addresses controlled by the authors, allowing them to review their own papers. That problem, first noted more than a decade ago, appears to be subsiding, says Ivan Oransky, co-founder of the Retraction Watch scientific-integrity blog. Reviews written by artificial intelligence software are a newer challenge, although their frequency has proved hard to measure. Retractions because of compromised peer review have spiked, reaching 2620 in 2021, up from 898 the year before, according to a Science review of Retraction Watch’s database.

MDPI says that since 2022 it has ramped up measures to detect fake peer reviews. These include software that can flag templates used by peer reviewers and instances when they recommend their own papers be cited. In 2023, it rejected more than 30,000 reviews “due to insufficient quality and/or compliance with MDPI’s reviewer guidelines,” the company’s statement says.
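MDPI has not described how that software works. As a rough illustration only, a self-citation check could be as simple as matching a reviewer’s surname against the references the review asks authors to add; everything in the sketch below (names, data, the matching rule) is hypothetical.

```python
def suspect_self_citations(reviewer_name, suggested_citations):
    """Return suggested citations that appear to name the reviewer as an author."""
    surname = reviewer_name.split()[-1].lower()
    return [ref for ref in suggested_citations if surname in ref.lower()]

# Invented review metadata: one suggested reference matches the reviewer's surname.
hits = suspect_self_citations(
    "A. B. Researcher",
    [
        "Smith & Jones (2020), J. Appl. Mech. 87(4).",
        "Researcher et al. (2022), Compos. Struct. 301.",
    ],
)
print(hits)  # ['Researcher et al. (2022), Compos. Struct. 301.']
```

A surname match is a weak signal on its own, since common surnames produce false positives, so a production system would presumably confirm matches against an author-identifier database such as ORCID before acting.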

Worries about possibly faked peer reviews should be taken seriously, says Kirsten Bell, a social anthropologist at Imperial College London, but they also need to be kept in perspective. Many authors have complained that reviews regarded as legitimate are perfunctory and off-point; some question whether the reviewer even read the manuscript. “How is a fake peer review different from a poor one?” Bell and colleagues asked in a 2022 article in Science, Technology, & Human Values. “There’s almost a fetishization of [conventional] peer review in these discussions of compromised peer reviews,” she adds, “which forgets that at the end of the day, it is fundamentally flawed.”

There is no obvious or easy remedy for compromised peer reviews, Bell contends. Detecting them is difficult to automate, and even some subject-matter experts struggle to identify them. She questions the benefit of prioritizing them over other malfeasance in peer-reviewed papers, such as fabricated data, which she says may be more prevalent and more damaging to the scholarly literature. Improving peer review broadly, Bell says, will require addressing systemic problems, including professional incentives for scientists to publish more and more journal articles, a shortage of reviewers to evaluate them, and promises by some open-access publishers to review manuscripts quickly.
