The new journey from research to industry and regulatory practice

One of the main messages of the Global Regulatory Science Summit (GRSS), held in Brasilia in September, was that decision-making about safety assessment is undergoing a major transition. Many new types of heterogeneous scientific data must now be integrated as evidence into assessments. And, as medicine becomes more personalised, increasing amounts of data on individuals must also be managed.

How can we best use this new evidence? When we are operating in new contexts, so different from the traditional study designs of animal experiments or the statistical results on outcomes and safety obtained from large clinical trials, how should we analyse data and respond to results? And when we are under new time pressures, for example when biomonitoring data arrive on an emerging threat, how do we make the best decisions we can, supported by the available data, knowledge and computer models? This new world involves new contexts and new requirements for practice. Although, as Hans-Georg Eichler, Senior Medical Officer at the European Medicines Agency, quipped, “It may still be a while before regulators respond to tweets.”

I think that remarks by some of the preeminent speakers at the GRSS indicate the way forward.

Emerging data-science toolbox

Dr. Eichler, for example, elaborated scenarios where more active, data-driven pharmacovigilance is needed: the stratification of patients into outcome groups where statistical power is weak; the use of advanced therapies, such as gene editing, where there may be only one intervention with one patient; cases where long-term, multifactorial monitoring of individuals is required for early interception of diseases of ageing such as Alzheimer’s disease; and personalised combination cancer therapies used to optimise treatment for the individual patient. Such situations, he counselled, require a careful weighing of evidence to analyse risk-benefit trade-offs using an emerging data-science toolbox of new methods, which in turn require support and absorption into practice.

New regulatory science

An industry perspective was provided by Merck Executive Director Frank Sistare on the role of emerging technologies in providing deeper and broader mechanistic and safety-profiling data before clinical studies in humans are initiated. Dr. Sistare noted that the weight-of-evidence approach recommended by the ICH S1 guidelines for carcinogenicity testing aims to reduce the need for costly, slow animal experiments (for example, two-year rat studies). Not only does this evidence mitigate the cost of initial safety clinical trials, it also gets promising new drugs into the clinic more quickly. The pharmaceutical industry, he said, already routinely uses in silico and in vitro methods to identify off-target effects of drug candidates at an early stage, e.g. probing potential interactions (or their absence) with receptors such as CAR, PPAR, ER, AhR, and p53, and hence generating a (de)risking profile. Many companies and research programmes are experimenting with new 3D cultures and “human-on-a-chip” systems that incorporate multi-cellular compartments and microfluidics, making in vitro systems more physiologically representative of the human in vivo situation. It will take time to figure out how to incorporate such data into regulatory decision-making; hence the need for the engagement and development of a new regulatory science.
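The kind of early off-target (de)risking described above can be sketched as a simple screen over a receptor panel. The receptor names come from the text; the IC50 values and the 10 µM potency cut-off below are illustrative assumptions, not real assay data.

```python
# Hypothetical sketch of early off-target (de)risking: flag receptors
# where a candidate shows in vitro activity more potent than a cut-off.
# Panel values and the threshold are illustrative assumptions only.
ACTIVITY_CUTOFF_UM = 10.0  # flag anything with IC50 below 10 uM


def off_target_hits(ic50_um_by_receptor: dict) -> list:
    """Return receptors the candidate hits below the potency cut-off."""
    return sorted(receptor for receptor, ic50 in ic50_um_by_receptor.items()
                  if ic50 < ACTIVITY_CUTOFF_UM)


# Illustrative panel for one candidate (IC50 in micromolar):
panel = {"CAR": 50.0, "PPAR": 2.5, "ER": 120.0, "AhR": 8.0, "p53": 75.0}
```

Here `off_target_hits(panel)` would flag PPAR and AhR as early liabilities worth mechanistic follow-up, while the remaining receptors contribute to the de-risking side of the profile.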

Irreproducible each in its own way

Dr. Weida Tong, Director of the Division of Bioinformatics and Biostatistics at the FDA National Centre for Toxicological Research, pointed out that simple, rule-based models developed by his group (the Rule of Two and Rule of Three) have proved both predictive and useful for decision-making on a complex and difficult endpoint, drug-induced liver injury (DILI). Distilling simpler models from the volume and complexity of new scientific research, he said, may be the key to regulatory acceptance and use. Paraphrasing Tolstoy’s insight that “Happy families are all alike, but every unhappy family is unhappy in its own way”, Dr. Tong observed that while all reproducible results are alike, each irreproducible result is irreproducible in its own way. In bioinformatics analysis workflows in toxicogenomics, he noted, genes are not created equal, either in response or in statistical behaviour: some genes are simply promiscuous in response or variation. Using more collective knowledge structures, such as gene-group signatures and pathway perturbations, to obtain more reliable response information for interpretation may be a promising way forward, overcoming some of the inherent variation and sensitivity in gene-level response.
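The appeal of such rule-based models is how little they demand: the published Rule of Two flags DILI concern when an oral drug is both high-dose and lipophilic, with daily dose ≥ 100 mg and logP ≥ 3 as the commonly cited thresholds (treat the exact cut-offs here as illustrative rather than authoritative), and the Rule of Three adds reactive-metabolite formation as a third factor. A minimal sketch:

```python
def rule_of_two(daily_dose_mg: float, logp: float) -> bool:
    """'Rule of Two' sketch: DILI concern when daily dose >= 100 mg
    AND lipophilicity logP >= 3. Thresholds as commonly cited for the
    FDA/NCTR model; illustrative, not a validated implementation."""
    return daily_dose_mg >= 100 and logp >= 3.0


def rule_of_three(daily_dose_mg: float, logp: float,
                  forms_reactive_metabolite: bool) -> bool:
    """'Rule of Three' sketch: the Rule of Two factors plus evidence of
    reactive-metabolite formation."""
    return rule_of_two(daily_dose_mg, logp) and forms_reactive_metabolite
```

The transparency of a model like this, with two or three inspectable factors rather than thousands of fitted weights, is exactly what makes it easy for a reviewer to interrogate and trust.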

Credibility through scoring

How, then, can we best move to a more hypothesis-driven knowledge framework for decision-making? By making greater efforts in three areas: applicability, uncertainty and credibility, urged Dr. Maurice Whelan, Head of Chemical Safety and Alternative Methods at the European Commission’s Joint Research Centre. We should also rethink our mental models for building communication and trust around new sources of data, he added, for perfection can be the enemy of the good. The new world of Integrated Approaches to Testing and Assessment (IATAs), which generate targeted data against Key Events (KEs) and Key Event Relationships (KERs), provides a promising knowledge framework for decision-making, including the comparison and discussion of results from different methods. Dr. Whelan also recommended a new credibility approach that scores evidence on how data- or knowledge-rich (or poor) it is.
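One way to picture such a credibility score is as a two-axis quadrant: is the evidence data-rich or data-poor, and knowledge-rich or knowledge-poor? The labels and quadrant readings below are a hypothetical illustration of the idea, not an established scoring scheme from the talk.

```python
# Hypothetical sketch of a credibility quadrant: score evidence by
# whether it is data-rich/poor and knowledge-rich/poor. The quadrant
# readings are illustrative assumptions, not an established scheme.
def credibility_quadrant(data_rich: bool, knowledge_rich: bool) -> str:
    if data_rich and knowledge_rich:
        return "high credibility: corroborated by both data and mechanism"
    if data_rich:
        return "data-driven: strong statistics, weak mechanistic grounding"
    if knowledge_rich:
        return "knowledge-driven: mechanistically plausible but data-sparse"
    return "low credibility: needs both data and mechanistic support"
```

Even a coarse grid like this makes the conversation explicit: two pieces of evidence with the same headline result can land in very different quadrants, and so deserve different weight in an assessment.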

Nanotech’s reproducibility crisis

Nanotechnology is providing an increasing number of new medical applications, and the FDA is bracing for the evaluation of a wealth of new science in their dossiers. However, many serious issues remain, among them the lack of understanding of, and consensus on, key molecular initiating events; the challenge of characterising nanoparticles; and, in particular, the time-dependent, changing nature of nano-bio interactions. As a result, nanotechnology faces a severe reproducibility crisis: 90% of publications related to clinical applications of nanotechnology have proven irreproducible, noted Dr. Anil Patri of the FDA NCTR. This area of evidence-creation should benefit greatly from a strengthening of reproducible, well-documented scientific methods and standards.

Going forward

Having listened to these remarks and to the additional discussions among the regulatory science community over two days in Brasilia, I came away convinced that the many new methods emerging from scientific research show much promise, but that considerable effort will be required to develop consensus on incorporating evidence from them into decision-making. Collective will and optimism should drive acceptance and adoption forward in the coming years. It will be instructive to see the progress we make by the next GRSS, to be hosted in September 2018 by the Chinese FDA in Beijing.