
AI makes real-world data more complex in FDA drug approvals

Doctor reports, medical images and laboratory test results from electronic health records are increasingly finding their way into the FDA's approval or post-approval processes for drugs and biologics – thanks in large part to artificial intelligence.

“There is a tremendous need for high-quality real-world data and evidence to support product applications,” said Nicholaas Honig, head of regulatory and legal at Aetion. “There is more and more of it and sponsors are really thinking about how this can support approvals in a variety of ways.”

Real-world data, as described by the Food and Drug Administration, refers to data on patient health status or the delivery of health care that are routinely collected from a variety of sources. While it is not a substitute for randomized clinical trials – the FDA's gold standard for determining a product's safety and effectiveness – it is considered valuable because it can provide insight into research questions that may not be answered using the traditional approach.

In July, the FDA finalized guidance on the use of EHRs and medical claims data in clinical trials to support drug and biologics approval decision-making. Notably, the guidance considers artificial intelligence in the collection and analysis of such data, as more drug sponsors and the companies they work with are moving toward using advanced technology tools.

Life sciences lawyers say AI will prove to be a double-edged sword: it can extract important health information from real-world data for studies used in the regulatory process or post-approval, but it can also invite more rigorous scrutiny of the product application during FDA review.

“The FDA wants to make sure that AI is not just a black box that people rely on,” said Sonia Nath, chair of the global life sciences and healthcare regulatory practice group at Cooley LLP.

“There is a lot of real-world data out there, and as AI becomes more widely used in healthcare, there will be even more access to it,” Nath said.

By the end of the year, the agency is expected to release guidelines on the use of AI and machine learning to aid drug development, which some lawyers say will shed light on how the FDA thinks about the use of this technology.

No “black box”

As required by the 21st Century Cures Act, FDA's July guidance outlined several expectations for drug and biologics sponsors to ensure the integrity of the real-world data collected and the processes used to evaluate them.

The FDA also recognized that AI “could enable faster processing of unstructured electronic health data.”

This is because EHR data are often unstructured at the point of capture, with the source information coming from physician notes and lab tests. Medical claims data – information submitted to insurers to obtain payment for treatments – can likewise be unstructured. Drug sponsors typically work with companies or vendors that capture EHR and medical claims data and then organize them to provide or sell to the sponsor.

While the FDA does not endorse any specific AI technology in its guidance, it stated that tools may include, among others, natural language processing, machine learning, and especially deep learning to extract data elements from unstructured text in electronic health records, develop computer algorithms that identify findings, or evaluate images or laboratory results.

“They are very cautious about the use of RWD, but they have definitely opened the door now,” said Xin Tao, head of the food and drug law practice at Baker & McKenzie LLP.

“I think the use of AI will be very attractive to certain companies, particularly in expanding indications, making label changes and maybe sometimes narrowing indications to avoid adverse effects,” Tao said.

But any AI methods used in regulatory decisions before the FDA “require a significant degree of human curation and decision-making,” the agency wrote in its guidance.

The FDA warned that AI could “introduce an additional layer of data variability and quality considerations into the final study-specific dataset.”

“It's a useful tool, but only that,” Honig said. “It's a tool that must be used with conscious intention and with an end goal in mind.”

Life sciences lawyers say sponsors should heed the FDA's warning that data captured in an EHR system or network may not represent comprehensive care. According to the FDA, EHR data also may not accurately reflect the presence, characteristics or severity of a specific disease.

These warnings, and the fact that AI is also being thrown into the mix, should encourage sponsors to be cautious when choosing data sources, lawyers say.

“The challenge is that the FDA has just begun to develop policies and approaches for both real-world evidence and AI,” said Sarah Blankstein, legal counsel in the Life Sciences Regulatory and Compliance Practice at Ropes & Gray LLP.

“Their understanding and approach are still evolving, so using AI tools and real-world data will bring additional complexity and uncertainty to an application,” Blankstein added.

Several parties are affected

Healthcare lawyers are also watching how the FDA will monitor industry responsibilities when AI is used in regulatory decisions.

While the FDA's guidance is directed at sponsors and other interested parties using electronic health records or medical claims data in clinical trials, the primary responsibility for submitting the application to the agency and explaining the evidence rests with drug sponsors.

“It’s not just one party here. It’s several parties,” Tao said.

“How do you delegate responsibility to ensure that this approach is as minimally burdensome as possible for the industry? Because if you leave the responsibility only to the sponsors, they may not be in the best position to implement it,” he said.

According to the FDA, when sponsors use AI or other derivation methods, the protocol must specify the assumptions and parameters of the computer algorithms used; the data source from which information was drawn to create the algorithm; whether the algorithm was supervised or unsupervised; and the metrics associated with validation of the methods.

“These guidelines underscore the need to exercise real care and discipline in collecting, maintaining and using real-world data and evidence,” said Jiayan Chen, partner at McDermott Will & Emery.

“AI shows great promise in deriving insights from the data itself. The volume of information and the unstructured nature of many data sets are challenges that AI can help overcome,” Chen said.

“You still need to be very careful both in your data selection and in your use of AI tools to ensure you protect yourself from potential bias and harm.”