Show Me the Money? Show Us the Data!

How will NIH’s new requirement that federally funded clinical trials report outcomes affect researchers?

The merits and cautions of data sharing have long been a topic of lively debate in the research community, but in the case of federally funded clinical trials, data sharing will no longer be optional.

In September 2016, the U.S. Department of Health and Human Services (HHS) and the National Institutes of Health (NIH) issued new requirements for registering and reporting results from clinical trials of drug and biologic products on ClinicalTrials.gov, including reporting summary results to the public.1,2

The new NIH policy, which went into effect January 18, 2017, applies to all trials funded by NIH – including phase I studies and clinical trials of behavioral interventions and products not regulated by the U.S. Food and Drug Administration (FDA).

With these regulations, HHS and NIH reinforced their commitment to improving clinical research practice and trial design, expanding the evidence base that informs clinical care, building public trust of clinical research, and increasing the number of patients who participate in trials.1,2

“We want to make sure that everyone – the patient and participant community, the scientists – has the maximal access to information about the trials and, ultimately, the results,” Carrie D. Wolinetz, PhD, associate director for science policy and director of the Office of Science Policy at NIH, told ASH Clinical News. Offering this information to the public is a way to “maximize the value of our investment in these trials,” she added.

“As a principle, the public sharing of clinical trial results is a great thing that serves the public good,” agreed Neal J. Meropol, MD, associate director for clinical research at the Case Comprehensive Cancer Center at Case Western Reserve University. “Over the years, many clinical trials are conducted, but their results are never published. It’s a waste of money and effort, and it is disrespectful to the volunteers who participate in clinical trials.”

Although many in the research community have praised the new policy as a step forward for research and innovation, investigators who spoke with ASH Clinical News acknowledged that instituting the requirements will yield a new set of challenges and add to the administrative burden already facing investigators.

Reforming Clinical Trial Practice

NIH has long been a proponent of open data sharing, but this new policy expands the scope of its existing data-sharing policy, which went into effect in 2003.3 The original policy was much narrower in its reach, requiring researchers to submit a data-sharing plan only if they were applying for more than $500,000 in funding during a single year.

“The 2003 rule was only applicable to very large awards and was fairly minimal in terms of the stipulated details,” Dr. Wolinetz said. The new policy provides more details on the how and when of data-sharing compliance.

Now, for any research effort that falls under NIH’s definition of a “clinical trial” and is either wholly or partially funded by NIH, investigators will be asked to submit a data-sharing plan that outlines how they will disseminate and share the study’s outcomes. Trials must be registered on ClinicalTrials.gov within 21 days of enrolling the first participant; then, after the trial is underway, investigators will be required to publish the results on ClinicalTrials.gov within 12 months of collecting the last data point – regardless of whether the trial resulted in a positive or negative outcome.

The new regulation also has teeth: “NIH will withhold clinical trial funding to grantee institutions if the agency is unable to verify adequate registration and results reporting from all trials funded at that institution.”1

If researchers fail to meet the demands of the new policy or regulation, they could face significant consequences. Those who do not comply with the registration and reporting rules, or provide false or misleading clinical trial information, will receive a warning and an opportunity to remedy the non-compliance within 30 days. Failure to do so will result in the researcher being subject to an additional civil monetary penalty of “not more than $10,000 for each day of the violation” until the violation is corrected.1

Falling short of the requirements could also hinder future efforts to secure funding, Dr. Wolinetz said. “If a non-compliant institution applied for funding, we could have to consider whether or not it would be appropriate to give it another award for a clinical trial.”

The data-sharing policy is just one recent action by NIH to reform clinical trial practice. The agency also decided that it will not accept unsolicited clinical trial applications; instead, a new policy requires that all clinical trial applications be submitted in response to a specific Funding Opportunity Announcement. Additionally, on June 21, 2016, NIH issued a policy that calls for the use of a single Institutional Review Board (IRB) for multisite trials to eliminate duplicative review and to streamline the IRB review process. “The shift in workload away from conducting redundant reviews is expected to allow IRBs to concentrate more time and attention on the review of single-site protocols, thereby enhancing research oversight,” according to the text of the final rule.4

“Our hope is that, taken together, all of these things will ultimately improve the way clinical trials are conducted – from conception through funding through conduct,” Dr. Wolinetz said.

Shining a Light on Undiscovered Data

In a recent viewpoint article published in JAMA announcing the new policy, NIH policymakers outlined several benefits of improving access to information about clinical trials, including making it easier for physicians, patients, and family members to find relevant information.5

“It is possible that increased registration of clinical trials could aid recruitment, reducing the number of trials that fail because they do not meet their enrollment targets and thus do not have the statistical power to give meaningful results,” wrote Kathy L. Hudson, PhD, deputy director for science, outreach, and policy at NIH, and lead author of the article.

“Although the process of enhancing the clinical trial pipeline may be a work in progress, the goal of these varied activities remains constant: to maintain public trust and to encourage advances in the design, conduct, and oversight of clinical trials,” Dr. Hudson and co-authors wrote in JAMA.5 “These innovations are intended to help NIH better fulfill its mission of supporting scientific discovery to improve human health, while elevating the entire biomedical research enterprise to a new level of transparency and accountability.”

The policy also addresses one of the biggest concerns with clinical trials today: Many results never see the light of day – leaving gaps in the knowledge base of researchers, clinicians, and patients.

For instance, a 2012 study looked at NIH-funded trials registered on ClinicalTrials.gov between 2005 and 2008 and found that just 46 percent of the trials’ results had been published in a peer-reviewed biomedical journal within 30 months of completion.6 By 51 months, that number had climbed to 68 percent, but that still leaves nearly one-third of results unpublished.

In 2015, a STAT investigation identified multiple prestigious research organizations that routinely failed to report trial results, violating a federal reporting law that took effect in 2008 and is enforced by the FDA.7 The reporters found that four of the top 10 recipients of federal medical research funds from NIH – Stanford University, the University of Pennsylvania, the University of Pittsburgh, and the University of California, San Diego – either never disclosed trial results or did so late 95 percent of the time.

Grzegorz Nowakowski, MD, associate professor of oncology and medicine in the Division of Hematology at the Mayo Clinic, said one reason results are underreported is the bias to report only positive trial outcomes. “We often perceive negative trials as a ‘failure,’ and we tend to shy away from those, but negative clinical trials are actually very informative,” he contended. “[Negative trials] may not change clinical practice, but they do inform the science quite a bit.”

Jorge Cortes, MD, deputy chair of the Department of Leukemia at the MD Anderson Cancer Center and a Jane and John Justin Distinguished Chair in Leukemia Research, agreed that even negative studies, or those in which a hypothesis was not validated, were initially conducted for a reason.

“We thought there was some good preclinical work, there was some important background that justified it to be done, and sharing that information will help the next studies,” he said. “It will help the development of preclinical data, new studies, and new drugs or interventions of a similar nature.”

Failing to report negative outcomes, Dr. Nowakowski added, hinders clinical research and can have serious implications for clinical practice. He noted that it is common for eager clinicians to put promising outcomes from phase II studies into practice before more robust phase III trials are conducted. If the phase III trial fails to support the use of the drug regimen or intervention but those results are never reported, clinicians could be using ineffective – or, in some cases, dangerous – therapies.

In the STAT investigation, reporters found that investigators failed to report data for two trials of the experimental anticancer drug ganetespib, even though “the results of tests involving breast and colorectal cancer patients showed serious adverse effects in 13 of 37 volunteers,” including one death.7

“Particularly in the field of lymphoma, several large phase III trials were presented with negative results, and many of the control arms or experimental arms in those trials included interventions that were already being used in clinical practice based on encouraging phase II results,” Dr. Nowakowski said.

A Bigger Burden to Bear?

The benefits of the policy seem apparent and abundant, but putting it into place will not be easy.

The new timelines are the biggest source of concern for investigators on federally funded clinical trials. With just 12 months between collecting the final data point and reporting the results, some researchers believe that this aggressive timeframe will require them to rework their typical post-trial analysis.

“It’s going to require some more effort and will put some strain on our systems, but it’s something that we need to adapt to,” Dr. Nowakowski said. “The field is changing. The pace of progress is faster than ever, and we need to stay on top of it.”

To meet the timeline, he explained, investigators will need additional support from fellow investigators and data analysts to prepare and report findings.

But, Dr. Cortes noted, certain aspects of trial analysis are out of the investigators’ control, and these external factors could make it difficult to meet the 12-month mark. “From the last data point generated, many things have to occur before we are able to publish data in a reasonable, clean, clear, and accurate way,” he said.

This could be particularly challenging for trials that necessitate a greater level of coordination among involved parties, such as multicenter studies or studies conducted in conjunction with the pharmaceutical industry.

“I am involved with a number of studies with pharmaceutical industry partners, but I could not publish the results because I only have my own patients’ data,” he said. “Taken alone, these results aren’t meaningful.” However, Dr. Cortes agreed that having some semblance of a deadline is important to keep investigators accountable for sharing results.

Dr. Wolinetz said NIH arrived at the 12-month deadline after considering public comments received during the HHS’ rule-making process. The formal rule allows a reporting delay of up to two years if a certification is submitted stating either that “an unapproved, unlicensed, or uncleared product studied in the trial is still under development by the manufacturer or that approval will be sought within one year after the primary completion date of the trial for a new use of an approved, licensed, or cleared product.”

Extensions could also be granted if there is “good cause” for the delay, and investigators could apply for a permanent waiver in extraordinary circumstances.

“We want to strike the balance between wanting to get the information out there in a reasonable amount of time, and recognizing – particularly for industry sponsors of clinical trials – that certain business practices could be problematic to that timeline,” Dr. Wolinetz said.

Although Drs. Nowakowski and Cortes believe that researchers will ultimately be able to meet the tighter timeline, they say one of the biggest challenges will come before a trial is underway: compiling the data-sharing plan.

“There is no question that this new facet adds to the administrative burden and, unfortunately, everything we do now is increasingly becoming more burdensome,” Dr. Cortes said.

According to Dr. Nowakowski, the data-sharing plan is the aspect of the new rule that investigators are “least enthusiastic about.” Many researchers, he added, felt that the requirement to share clinical findings within the designated timeframe was sufficient to improve the value of clinical trials. Many variables go into the planning, development, and execution of a clinical trial, he noted, and those can be difficult to predict at the start of a trial.

That lack of enthusiasm is caused, in part, by uncertainty about what exactly NIH will require of data-sharing plans. Dr. Wolinetz said NIH is working to develop an organization-wide data-sharing policy that will expand on current guidance for data-sharing plans and outline what types of databases to use, which metadata to attach, in what format the data should be presented, and how long the data will be available.

“We are at the point now where we are trying to be very thoughtful about how we take this data-sharing plan and move into the next phase of specific policy,” she said.

Dr. Meropol said that deciding how to communicate outcomes data and determining which information would be most valuable to other stakeholders will also be challenging.

“The language used for communicating study implications with a lay audience is much different than the language used to communicate results with a scientific audience,” he said. “There is a fine line between a dispassionate summary of results and marketing. This factor needs to be carefully considered and monitored.”

Great Expectations

For larger research institutions that were already subject to NIH’s 2003 data-sharing policy, there may not be substantial changes to existing practices, but for researchers applying for smaller amounts of funding, adapting to the new policies will require a significant shift in practice.

“The new ruling has expanded the requirements somewhat, but not dramatically,” Dr. Meropol said, adding that his cancer center has been providing data-sharing plans and sharing trial outcomes for several years. “For many sites, this will be a wake-up call.”

The researchers who spoke with ASH Clinical News agreed that a broad educational effort could help prepare investigators to meet the new expectations; it could involve continuing medical education activities or online resources or, as Dr. Cortes suggested, a simple, straightforward outline of what investigators need to do at each step of the process, from grant application to final reporting.

The education extends to NIH staff, according to the viewpoint article published in JAMA. “As a crucial first step, NIH will require Good Clinical Practice (GCP) training for investigators and NIH staff responsible for conducting or overseeing clinical trials,” Dr. Hudson and co-authors wrote. “The aim is to help ensure that all involved in the clinical trial enterprise have the appropriate knowledge about the design, conduct, monitoring, recording, analysis, and reporting of clinical trials. While GCP training on its own may not be sufficient, it provides a consistent and high-quality standard.”5

Dr. Wolinetz pointed to several instructional webinars available online to guide investigators through the HHS final rule, complementary NIH policy, and what it all means for their research (see SIDEBAR for more information).

Researchers can also learn from each other, Dr. Meropol added, by reporting best practices and strategies for complying with the regulations in the most efficient manner – and in the way that is most valuable to the public.

Investigators are not the only ones who could stand to benefit from this extra layer of education. Informing academic centers of the new obligations could help research institutions ensure that investigators are given enough flexibility and time to meet the new reporting requirements, according to Dr. Nowakowski.

A Step Forward

Regardless of the obstacles to implementation and the steep learning curve many investigators may face, the recent actions to promote widespread data sharing represent a move forward for science.

Similar practices are being adopted across the world. For instance, in the Netherlands, although there is no formal government obligation yet, several of the funding agencies supported by the government have made data-monitoring plans, including data sharing, a compulsory requirement to secure funding. The European Union also recently expanded the open-data component of its Horizon 2020 program to cover research in all areas of the program, with the aim of promoting data and knowledge integration. Researchers can opt out at any stage if there is a reason not to comply, but those who opt in must submit a data-management plan similar to the data-sharing plans outlined in the new NIH policy.8

In the United States, some are hoping that NIH’s decision to create a specific data-sharing policy for all NIH-funded clinical trials will set the tone for the research community. “I wish there were a mandate like that for every study – not just NIH-funded studies,” Dr. Cortes said.

When investigators are forthright about the trials they have performed and the outcomes they have observed, researchers contend, everyone benefits.

“This transparency allows people to see what others are doing and what they have found,” Dr. Wolinetz said. “It helps refine the design of future experiments, prevents redundant experiments, and helps improve reproducibility and rigor.” —By Jill Sederstrom


  1. Department of Health and Human Services. 42 CFR Part 11. Final Rule. Accessed April 5, 2017.
  2. National Institutes of Health. NIH Data Sharing Policy. Accessed April 5, 2017.
  3. National Institutes of Health. Final NIH Statement on Sharing Research Data. Accessed April 5, 2017.
  4. National Institutes of Health. Final NIH Policy on the Use of a Single Institutional Review Board for Multi-Site Research. Accessed April 5, 2017.
  5. Hudson KL, Lauer MS, Collins FS. Toward a new era of trust and transparency in clinical trials. JAMA. 2016;316:1353-4.
  6. Ross JS, Tse T, Zarin DA, et al. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012;344:d7292.
  7. Piller C. Law ignored, patients at risk. STAT. Accessed April 5, 2017.
  8. European Commission. H2020 Programme: Guidelines on FAIR Data Management in Horizon 2020, July 26, 2016. Accessed April 6, 2017.

For more details on the new data-sharing requirements for NIH-funded research, and other changes implemented in the new NIH policy, visit

NIH’s National Library of Medicine, which maintains ClinicalTrials.gov, has also produced a series of training webinars to introduce investigators to the key provisions of the rule. Visit to view the presentation and associated materials.