The Code of Misconduct

Scientific misconduct is more than falsification, fabrication, and plagiarism – and harder to identify. 

In 1998, Andrew Wakefield, MD, and co-authors investigated a cohort of 12 children with chronic digestive problems who were experiencing symptoms of regressive developmental disorder. All children, they reported, had a history of normal development, and neurologic and psychological assessments revealed no abnormalities. After ruling out other contributing factors, Dr. Wakefield’s team concluded that both the gastrointestinal and behavioral symptoms were caused by an “environmental trigger”: administration of the measles, mumps, and rubella (MMR) vaccine.1

They published their results in The Lancet and, though small, the study was impactful. After media outlets picked up the research, parents seized upon the apparent connection between routine vaccination and the onset of regressive autism.

Twelve years later, the journal retracted the article, citing an investigation by the British General Medical Council that found “several elements” of research misconduct. The patients included in the trial were carefully selected and some of the research was funded by lawyers acting on behalf of parents who were involved in lawsuits against vaccine manufacturers.2 Ultimately, the Council charged Dr. Wakefield with dozens of allegations of misconduct, from failing to report conflicts of interest to showing “callous disregard” for the children in his study by subjecting them to invasive tests.

But the damage was already done. Many parents chose not to vaccinate their children in hopes of reducing autism risk. As the number of children receiving the MMR vaccine dropped, outbreaks of measles and mumps began cropping up in the U.S., the U.K., and Canada.

The saga of the autism-vaccination paper is one of the most visible examples of research misconduct. A Google search for the original Lancet article yields several reports of its retraction, and the article’s webpage is watermarked with “RETRACTED” in large red letters. Dr. Wakefield’s paper also falls directly into the U.S. government-wide definition of research misconduct: fabrication, falsification, or plagiarism – FFP, for short – in proposing, performing, or reviewing research, or in reporting research results (SIDEBAR 1).3

Unfortunately, many cases of research misconduct are not black and white, according to Ivan Oransky, MD, distinguished writer in residence at New York University’s Carter Journalism Institute and co-founder of RetractionWatch.com. Most, he contended, fall into a gray area between honest error and intentional deceit.

“In the past several years, there has been more recognition that the [U.S. Department of Health and Human Services’] Office of Research Integrity’s (ORI’s) definition may be too tight and too restrictive,” Dr. Oransky told ASH Clinical News. “There are other questionable research practices that would not meet the criteria for misconduct, but that are clearly detrimental to the research record and to science.”

ASH Clinical News recently spoke with Dr. Oransky and several other experts in the field of ethics about the definition of misconduct and, once misconduct is identified, who is responsible for righting the wrong.

Defining Misconduct 

Cases of FFP fall at one extreme end of the spectrum of research misconduct, according to Jennifer A. Byrne, PhD, professor of molecular oncology in the discipline of child and adolescent health at The University of Sydney School of Medicine in Australia.

“Misconduct occurs when someone is doing something wrong and he or she is pretty confident that it is wrong,” Dr. Byrne said. “There are other cases where people do things that aren’t good scientific practice or could perhaps be classified as ‘honest error.’”

Honest error could include calculation errors, poor experimental design, or even some forms of plagiarism that are unintentional. For example, certain papers have been retracted when a well-respected researcher was found to have “self-plagiarized,” using content verbatim in one journal that was published previously in another journal.4

In another case, authors issued a retraction of a paper testing the expression of the erythropoietin gene in a human renal cell line after other laboratories testing the cell line discovered that it had been unknowingly cross-contaminated.5

Other examples of detrimental research practices include cherry-picking data, ignoring outlier values, hacking p values (or manipulating data analysis to find statistically significant patterns), designing clinical trials with certain results in mind, only publishing research with positive results, and manipulating statistics.

In fact, many retractions issued by journals originate from researchers themselves. For example, in 2012, the Journal of Clinical Oncology issued a retraction of a 2007 paper after the co-authors “identified several instances of misalignment of genomic and clinical outcome data.”6 (For examples of the most common types of research errors, see SIDEBAR 2).

How Common Is Misconduct? 

With research misconduct taking so many forms, its prevalence is hard to pin down. “Anybody who tells you they know the overall prevalence is being overly enthusiastic,” Dr. Oransky alleged.

However, some researchers have attempted to get a handle on research misconduct by determining its frequency. In a 2009 meta-analysis, Daniele Fanelli, PhD, compared results of 21 surveys asking scientists directly whether they or a colleague had committed research misconduct, finding that an average of 2 percent of scientists admitted to “fabricating, falsifying, or modifying data or results at least once.”7 Still, more than one-third of respondents admitted to engaging in other questionable research practices, such as “dropping data points based on a gut feeling” and “changing the design, methodology, or results of a study in response to pressures from a funding source.”

“Considering that these surveys ask sensitive questions and have other limitations, it appears likely that this is a conservative estimate of the true prevalence of scientific misconduct,” Dr. Fanelli noted.

In 2012, The BMJ published results of an electronic survey that included replies from about 3,000 U.K.-based authors and reviewers.8 The survey showed that 13 percent of respondents reported having witnessed or having firsthand knowledge of scientists or doctors inappropriately adjusting, excluding, altering, or fabricating data during their research or for purposes of publication.

Looking specifically at the U.S. federal definition of misconduct, the ORI confirmed approximately 200 cases of misconduct over 20 years, according to a report from Nicholas Steneck, PhD, a consultant to the ORI. Divided by the total number of researchers, Dr. Steneck wrote, that figure yields a rate of about one confirmed case of research misconduct per 100,000 researchers per year.9

“Overall, that frequency seems pretty rare,” said Michael Kalichman, PhD, founding director of the University of California San Diego’s Research Ethics Program. “The problem is that you don’t know what you don’t know. Those were only the cases that were discovered or the cases of people not smart enough to not get caught.”

Truth and Consequences 

When a scientist gets caught in the act of research misconduct, the type of punishment depends on the severity of the misconduct and the perpetrator’s track record.

If the misconduct qualifies as fraud under the legal definition, a scientist can be prosecuted for civil or criminal fraud, though it is rare for people to be threatened with jail time for this type of offense, according to David B. Resnik, JD, PhD, a bioethicist and Institutional Review Board Chair at the National Institute for Environmental Health Sciences at the National Institutes of Health (NIH).

In cases of civil fraud, the offending party would have to pay back the money he or she fraudulently received from the government. A scientist found guilty of criminal fraud can face prison time.

In one of the rare cases of scientific misconduct leading to jail sentencing, Eric Poehlman, PhD, a former research professor at the University of Vermont (UVM) College of Medicine, pleaded guilty to falsifying and fabricating research data on obesity, menopause, and aging in numerous federal grant applications and academic articles between 1992 and 2002.10 Dr. Poehlman had secured $2.9 million in NIH funding during his tenure at UVM. In addition to paying nearly $200,000 to settle a civil complaint with the institution, he was sentenced to one year and one day in prison, with two years of probation.

Federal agencies also may impose a kind of “sanction” on researchers found guilty of misconduct, which can include banning them from receiving any federal funding for a certain time, Dr. Resnik said.

Lying on a federal grant application is a crime punishable by up to five years in federal prison, but the punishment for committing FFP in a published journal article is at the discretion of the journal. Typically, the journal that published the fraudulent work will issue a retraction.

The consequences of these retractions vary, as well. In addition to “seeing a decline in citations for the author,” Dr. Oransky said, “we could see a decline in citations for the whole specialty.”

One study found that, after a retraction is issued, “ordinary authors [within that specialty] experience large citation losses to their prior work.”11 However, “eminent” authors experienced little loss in citation frequency.

“[These results show that] reputation is important,” he said, adding that “if there is a retraction for an honest error, there is no decline in citations. That is good news.”

Journals also might impose publishing sanctions on researchers who commit scientific misconduct, and the scientists’ home institution or company may impose consequences on the individuals, ranging from increasing supervision of their research practices to placing a letter in a permanent file detailing the misconduct. In more severe cases, researchers can be fired.

Blowing the Whistle 

The low reported rates of scientific misconduct could be attributed in part to the difficulty of reporting and investigating such allegations. Often, researchers may “self-retract” when they identify honest errors in their own research, or a reader or fellow researcher may contact a journal to allege misconduct. In other cases, issues of scientific misconduct may be raised within the institution where the researcher works, and then journals are contacted later.

“Journals have an obligation to try to look into every allegation they receive, whether or not they believe it to be true,” Dr. Kalichman said. “The problem with that ideal is that the journal’s office may be located in Washington, and the institution in Oregon, and it is unlikely they have the resources to investigate fully.”

Some technological advances are making misconduct allegations less challenging to investigate, though, Dr. Byrne said. For example, researchers at Harvard University are working with the scientific publisher Elsevier to develop technology that would detect manipulated or misused images – one of the most common types of misconduct.12 One study estimated that about 4 percent of published papers pulled from 40 scientific journals contained “problematic figures,” defined as figures that were inappropriately duplicated or altered.13 Additional papers from authors found to have used problematic figures were more likely to also contain problematic images.

Other programs are designed to scan article text to detect plagiarism. Plagiarism is not always an exact copy of previously written text, though, so software programs search for word frequencies and distribution of text across the whole submission. “In particular, journals might look for papers that have a high degree of similarities, outside of things like confidence intervals,” Dr. Byrne said.
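Commercial plagiarism detectors are proprietary, but the core idea Dr. Byrne describes – comparing word-frequency profiles rather than looking for exact copies – can be sketched in a few lines. The following is a minimal, stdlib-only illustration of that idea, not any publisher’s actual algorithm:

```python
import math
from collections import Counter

def similarity(text_a, text_b):
    """Cosine similarity of word-frequency vectors:
    1.0 for identical wording, 0.0 for no shared words."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Identical sentences score ~1.0; a one-word substitution still scores high
print(similarity("the cells were cultured overnight",
                 "the cells were cultured overnight"))   # ≈ 1.0
print(similarity("the cells were cultured overnight",
                 "the cells were incubated overnight"))  # ≈ 0.8
```

Production systems layer on stemming, word n-grams, and exclusion of unavoidable shared phrasing (standard methods wording, statistical boilerplate), but a frequency-vector comparison like this is the common starting point.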

New programs also are recalculating statistical values to identify people who have rounded down p values to achieve statistical significance.
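Tools in this vein (statcheck is a well-known example) recompute a p value from the reported test statistic and flag reports that are smaller than rounding alone could explain. Here is a simplified sketch for a two-sided z test; real tools also handle t, F, and chi-square statistics and their degrees of freedom:

```python
import math

def two_sided_p(z):
    """Two-sided p value for a z statistic, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

def p_was_rounded_down(z, reported_p, decimals=3):
    """Flag a reported p value that is smaller than the recomputed one
    by more than rounding to `decimals` places could explain."""
    tolerance = 0.5 * 10 ** (-decimals)
    return reported_p < two_sided_p(z) - tolerance

# z = 1.90 actually gives p ≈ 0.057, so reporting "p = 0.05" gets flagged
print(p_was_rounded_down(1.90, 0.05))   # True: rounded past significance
print(p_was_rounded_down(1.90, 0.057))  # False: consistent with rounding
```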

“Overall though, wide application of these programs could have huge downstream consequences, particularly in the short term,” Dr. Byrne admitted. “It creates a situation where, if a publisher uses this software and finds a large number of questionable articles, what does the publisher do with them? We can use technology to create a high-throughput screening system, but each queried paper then needs to be individually assessed, and that takes time.”

Even if journals investigate every allegation of misconduct, “bad apples” will always be able to get through even the most stringent peer-review processes.

Dr. Resnik compared the situation to looking at a beautiful painting of a mountain landscape. The artist may say that the scene exists somewhere, but all a reviewer can do is tell whether the tree looks like a tree, or the mountain looks like a mountain.

“With peer-review, you are reviewing a summary, or a rendering, or what researchers say they did,” Dr. Resnik said. “They could have made it up.”

“Blood is one of the most authoritative scientific journals in the biomedical field, and, for us, it is absolutely essential to publish what is right,” explained Blood Editor-in-Chief Bob Löwenberg, MD, PhD. As a peer-reviewed journal, though, “all we can do is look at the data and ask, ‘Are they consistent?’ These things are difficult to find when reviewers and editors receive a manuscript.” For more about how Blood prevents and handles scientific misconduct, see SIDEBAR 3.

Some editors argue that punishing individual scientists is outside of a journal’s jurisdiction. In a recent case of a retracted manuscript published in Nature Plants, Chief Editor Chris Surridge, PhD, defended the peer-reviewed journal’s role: “Decisions about publication of research are made on the basis of the research submitted and the peer reviews of that research. … It is not our role to investigate scientific misconduct or determine appropriate sanctions. … Our role is to ensure that the studies that are submitted to us and which are ultimately published are as accurate and reliable as possible irrespective of who the authors are.”14

Regarding the same case, Committee on Publication Ethics (COPE) secretary and interim treasurer Charon Pierson, PhD, noted that COPE does not support the practice of temporarily banning authors guilty of misconduct. “I would say that the only responsibility of the journal is to scrutinize manuscripts,” she said. “To deal with the scientists themselves – that’s the realm of the institutions, the laboratories, the funding agencies, the governments, all of those pieces of the puzzle.”14

Dr. Byrne agreed, but added that one of the inherent problems with identifying misconduct is that if someone is knowingly doing something wrong, that person is going to cover his or her tracks. “True misconduct, such as data falsification or manipulation, can be hard to detect because it is hidden,” she said. “It’s often easier for peer reviewers to find honest mistakes.”

Stopping Misconduct 

If journals cannot be expected to carry the burden of identifying and investigating all cases of possible research misconduct, who should be responsible for it?

The interviewed experts all agreed: Everyone involved.

“Scientists can say it’s the journals’ fault and journals can say it’s the institutions’ fault, and institutions can say it’s the NIH’s fault … let’s stop blaming and just say that everyone is responsible,” Dr. Oransky asserted.

And, if everyone is part of the problem, then everyone must be part of the solution.

To aid in deterring or identifying misconduct, Dr. Byrne advised that researchers’ employers should practice good supervision of their staff and stay vigilant for signs of possible misconduct. Red flags might include employees working outside of normal hours, coming and going erratically, or working only on weekends, she said.

“Good, strong supervision, open lines of communication, and insistence on reviewing primary data are all important parts of ensuring research integrity,” Dr. Byrne added.

Dr. Resnik emphasized the importance of strong mentoring programs. “Mentoring is a very important part of education and training. Mentors can promote good science and model good behaviors.” He also called for the scientific community to provide adequate protection for whistleblowers. If people want to report misconduct, they must be protected from possible retaliation.

Rehabbing Research 

The potential for people to “come back” from allegations of misconduct or retractions of scientific papers depends on the severity of the case.

“I imagine that there are a lot of cases of misconduct that don’t even make it out of the institution or are handled under the table,” Dr. Resnik surmised. “If it is caught early and someone is sanctioned internally, maybe that person can go on to have a good career.”

Allegations of misconduct can be career-killers, too, because they often lead to termination and sanctions. In a 2014 analysis of the financial costs and personal consequences of research misconduct, authors looked at 291 articles retracted (mostly for falsification or fabrication) over 20 years.15 These retracted papers accounted for about $58 million in direct funding from the NIH – or less than 1 percent of the NIH budget during the same 20-year period – and each article accounted for an average of just under $400,000 in direct costs. After investigators had their papers retracted, their median numbers of annual publications dropped from 2.9 to 0.25, representing a 91.8-percent decrease in publication output.

Three researchers who were charged with misconduct by the ORI (and hence barred from receiving federal funds for periods of three to five years) detailed in The Scientist the long-term effects of these charges, particularly in the digital age.16 Any time the ORI formally rules on misconduct, the information is published on the internet, so, even if a person’s debarment from federal funding was lifted more than a decade ago, the description of the ORI’s case and the penalty the investigator received will still show up in an internet search. The investigators claimed that the penalties have cast decades-long shadows over their careers and funding prospects.

However, tools exist for researchers who have committed misconduct and want to find a path toward redemption.

The ORI launched its RePAIR (Restoring Professionalism and Integrity in Research) program to provide “intensive professional development education for investigators who have engaged in wrongdoing or unprofessional behavior, including persistent non-compliance.”17 Participants in this program will attend several days of intense intervention at a neutral site, followed by a lengthy period of monitoring back at their home institution. The premise of RePAIR is that these interventions will “rebuild [investigators’] ethical views” and turn them back into responsible citizens of the research community.

Washington University in St. Louis houses the NIH-funded Professionalism and Integrity in Research (P.I.) Program, which offers “personalized assessments, a group workshop, and post-workshop coaching calls to help researchers operate professionally in today’s complex environments.” Among potential candidates for the program are those who “have been investigated for noncompliance or misconduct [who] wish to move forward constructively.”18

On the whole, though, just like many cases of misconduct themselves, whether a researcher should be accepted back into the research community is often not a straightforward question.

“One lesson we should all learn is that better education and training on good scientific practices is needed,” Dr. Resnik said. “We don’t want anyone to commit misconduct because of ignorance.”
—By Leah Lawrence

References

  1. Wakefield AJ, Murch SH, Anthony A, et al. RETRACTED: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet. 1998;351:637-41.
  2. NHS Choices. Ruling on doctor in MMR scare. Accessed May 22, 2018, from https://www.nhs.uk/news/medical-practice/ruling-on-doctor-in-mmr-scare/.
  3. The Office of Research Integrity. Definition of research misconduct. Accessed May 22, 2018, from https://ori.hhs.gov/definition-misconduct.
  4. Chabner BA. Self-plagiarism. Oncologist. 2011;16:1347-8.
  5. Retraction of Rini BI. VEGF-targeted therapy in metastatic renal cell carcinoma. Oncologist. 2011;16:1481.
  6. Retraction of Dressman HK, Berchuck A, Chan G, et al. An integrated genomic-based approach to individualized treatment of patients with advanced-stage ovarian cancer. J Clin Oncol. 2012;30:678.
  7. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One. 2009;4:e5738.
  8. Schroter S, Godlee F, Wager E, Green M. BMJ’s research misconduct survey. Accessed May 22, 2018, from https://blogs.bmj.com/bmj/files/2012/01/BMJ-research-misconduct-survey-for-posting-on-bmj.com_.pdf.
  9. Steneck NH. Assessing the integrity of publicly funded research: a background report for the November 2000 ORI Research Conference on Research Integrity. Accessed May 22, 2018, from https://ori.hhs.gov/sites/default/files/assessing_int_res.pdf.
  10. The New York Times Magazine. An unwelcome discovery. Accessed May 22, 2018, from https://www.nytimes.com/2006/10/22/magazine/22sciencefraud.html.
  11. Jin GZ, Jones B, Lu SF, Uzzi B. The reverse Matthew effect: catastrophe and consequence in scientific teams. NBER Working Paper No. 19489. Accessed May 22, 2018, from http://www.nber.org/papers/w19489.
  12. Elsevier. At Harvard, developing software to spot misused images in science. Accessed May 22, 2018, from https://www.elsevier.com/connect/at-harvard-developing-software-to-spot-misused-images-in-science.
  13. Bik EM, Casadevall A, Fang FC. The prevalence of inappropriate image duplication in biomedical research publications. mBio. 2016;7:e00809-16.
  14. The Scientist. How journals treat papers from researchers who committed misconduct. Accessed May 23, 2018, from https://www.the-scientist.com/news-analysis/how-journals-treat-papers-from-researchers-who-committed-misconduct-31053.
  15. Stern AM, Casadevall A, Steen RG, Fang FC. Financial costs and personal consequences of research misconduct resulting in retracted publications. eLife. 2014;3:e02956.
  16. The Scientist. Life after fraud. Accessed May 22, 2018, from https://www.the-scientist.com/uncategorized/life-after-fraud-44032.
  17. The Office of Research Integrity. RePAIR program provides solution to redeem researchers. Accessed May 23, 2018, from https://ori.hhs.gov/blog/repair-program-provides-solution-redeem-researchers.
  18. P.I. Program. Helping researchers become more effective professionals. Accessed May 23, 2018, from http://integrityprogram.org/.

According to the U.S. Department of Health and Human Services’ Office of Research Integrity (ORI), “research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”

This definition includes the following types of misconduct:

  • Fabrication: making up data or results and recording or reporting them
  • Falsification: manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record
  • Plagiarism: the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit

The ORI also notes that research misconduct does not include honest error or differences of opinion.

Source: The Office of Research Integrity. Definition of research misconduct. Accessed May 22, 2018, from https://ori.hhs.gov/definition-misconduct.

Mistakes in peer-reviewed papers are “easy to find but hard to fix,” according to David B. Allison, PhD, from the department of biostatistics at the University of Alabama at Birmingham School of Public Health. In an article published in Nature, Dr. Allison and co-authors analyzed dozens of peer-reviewed studies published in the field of obesity research to identify errors or miscalculations that sully the scientific record.

While some articles described mathematically or physiologically impossible results, the authors also identified three common “substantial and invalidating” errors:

  • Mistaken design or analysis of cluster-randomized trials: In these studies, all participants in a cluster are given the same treatment. The number of clusters (not just the number of individuals) must be incorporated into the analysis. Otherwise, associations or differences reported in results sections may be found, falsely, to be statistically significant. Designs with only one cluster per treatment are not valid as randomized experiments, regardless of how many individuals are included.
  • Miscalculation in meta-analyses: Effect sizes are often miscalculated when meta-analysts are confronted with incomplete information and do not adapt appropriately. Different study designs and meta-analyses require different approaches. Incorrect or inconsistent choices can change effect sizes, study weighting, or the overall conclusions.
  • Inappropriate baseline comparisons: Rather than comparing “differences in nominal significance” (the DINS error), differences between groups must be compared directly. For studies comparing two equal-sized groups, the DINS error can inflate the false-positive rate from 5 percent to as much as 50 percent.

Source: Allison DB, Brown AW, George BJ, Kaiser KA. Reproducibility: a tragedy of errors. Nature. 2016;530:27-9.
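The inflation caused by the DINS error described above is easy to demonstrate by simulation. In this illustrative sketch (sample size and effect size are chosen for convenience, not taken from the Nature article), both groups improve by the same true amount, so any declared “group difference” is a false positive:

```python
import math
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def simulate(n=25, delta=0.4, trials=4000):
    """Simulate studies in which BOTH groups improve by the same true
    amount `delta` (change-score SD fixed at 1), so there is no real
    group difference. Count how often each analysis declares one anyway."""
    crit = 1.96  # two-sided 5-percent cutoff for a z test
    dins = direct = 0
    for _ in range(trials):
        a = [random.gauss(delta, 1) for _ in range(n)]  # change scores, group A
        b = [random.gauss(delta, 1) for _ in range(n)]  # change scores, group B
        za = mean(a) * math.sqrt(n)  # within-group z (known SD = 1)
        zb = mean(b) * math.sqrt(n)
        # DINS shortcut: "A improved significantly, B did not" (or vice versa)
        if (abs(za) > crit) != (abs(zb) > crit):
            dins += 1
        # Correct analysis: test the between-group difference directly
        if abs(mean(a) - mean(b)) * math.sqrt(n / 2) > crit:
            direct += 1
    return dins / trials, direct / trials

dins_rate, direct_rate = simulate()
print(f"DINS false-positive rate:   {dins_rate:.2f}")    # roughly 0.5 here
print(f"direct-test false-positive: {direct_rate:.2f}")  # near the nominal 0.05
```

With per-group power near 50 percent, about half of the simulated studies end up with one significant and one non-significant group – matching the up-to-50-percent false-positive rate cited above – while the direct between-group test stays near the nominal 5 percent.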

While most research is conducted and reported responsibly, Dr. Löwenberg said, mistakes and misconduct can take place. He outlined how Blood responds to cases of FFP and unintentional errors in the research it publishes:

  • Prevention: When authors submit a manuscript, they sign a statement claiming full responsibility for the content, and that the research has been conducted correctly. Before any paper is published, it is reviewed through a software program to check for plagiarism and Blood editorial staff check images for signs of manipulation. The same process is undertaken for submissions to Blood Advances.
  • Responding to allegations: If there is a signal of scientific misconduct – from the software and staff review, the authors, or a whistleblower – we take it seriously. Because we do not have access to the raw data or the laboratory books, our first step is going back to the author with our questions. If their responses aren’t satisfactory, we will bring our concerns to the authors’ home institution, which will conduct its own investigation. We also have a staff member on Blood – a specialist in scientific integrity – who helps us determine whether the errors were the result of intentional misconduct, and then provides advice regarding appropriate actions based on current best practices in scientific publishing.
  • Penalties for scientific misconduct: If an institutional investigation finds that scientific misconduct has occurred, the institution typically contacts us to recommend a retraction. We will issue a retraction with an explanatory statement. If the errors are determined to be unintentional and do not invalidate the conclusions of the paper, or if the authors have contacted us about relatively minor errors they identified in their own research, we will issue a correction.
