S2: Evidence-based chemicals assessment

Evidence-based chemicals assessment: a case for a new paradigm in literature-based (toxicological) evidence assessments, big data and AI.

Dr. Katya Tsaioun, Evidence-based chemicals assessment, OpenTox Asia 2018
PRESENTING AUTHOR: 

Katya Tsaioun

INSTITUTION / COMPANY: 

Evidence-Based Toxicology Collaboration at Johns Hopkins Bloomberg School of Public Health (EBTC)

POSITION: 

Executive Director

ABSTRACT CONTENT / DETAILS: 

In recent years the public has increasingly demanded transparent and objective risk assessment of the chemicals used in daily life and released into the environment. These safety assessments become the scientific foundation of risk analysis and, subsequently, of public policy, and multiple efforts around the world are aimed at achieving this goal. This talk draws attention to a methodology that has existed for three decades, has revolutionized the quality and effectiveness of clinical research, and has become known as evidence-based medicine (EBM). In clinical research, EBM approaches brought consistent transparency, objectivity, and a standardized framework, and over time raised clinical research to a higher standard. Systematic reviews (SRs), the principal tool of EBM, are studies that collect and quantitatively summarize the available evidence in a rigorous, transparent, and objective manner according to a priori criteria described in a protocol. EBM principles are now being applied to different problems in hazard and risk assessment by a number of agencies around the world, and evidence-based methods and SRs, when adopted by industry and regulators, can bridge the gap between evidence generation and confidence in regulatory decisions.

The biggest obstacle to fully embracing these tools in chemicals risk assessment is the time and effort required to search, assemble, and extract the evidence from a wide literature base, all of which is needed to conduct a systematic review. This presents a “big data” problem, for which solutions are emerging. The talk will explore how text mining and AI can change the way we gather and synthesize evidence for chemicals risk assessment. AI-enabled processing of data can also be revolutionary in integrating the different types of evidence (in vitro, computational, multi-species in vivo) into a “systematic map” by conducting near-instantaneous systematic reviews of complex toxicological evidence. This systematic map can then be overlaid on Adverse Outcome Pathways (AOPs), providing firm evidence behind the different molecular events and identifying the gaps toward which research efforts could be directed.

With the ability to process data very quickly, AI systems can discover and exploit relationships in the data that human minds cannot easily see. Moreover, AI-based screening fundamentally changes the relationship between researcher and data. The vast majority of SR scientists’ time is currently spent designing broad but sensitive search strategies for multiple databases and screening processes for systematically retrieving thousands of studies, work that exists only because the data acquisition and organization problem has had to be solved with insufficient processing power.

In summary, this talk will outline the advantages of, and steps in, evidence-based methodology and propose a path toward AI-powered, continuously updated evidence-based mapping that will serve as the foundation of chemicals risk assessment in the future. An example of a currently conducted SR aimed at comparing in vitro, in vivo, and real-world evidence data streams will be presented and discussed in detail in a breakout group.
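
To make the AI-assisted screening step above concrete, the following is a minimal, hypothetical sketch of title/abstract screening prioritisation, assuming a scikit-learn TF-IDF plus logistic-regression baseline; the example records and labels are invented for illustration and do not represent the EBTC pipeline described in the talk.

```python
# Minimal sketch of AI-assisted title/abstract screening for a systematic review.
# Hypothetical example: the labelled records and model choice are illustrative
# assumptions, not the screening system discussed in the presentation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of hand-screened records (title text) with include (1) / exclude (0) labels.
labelled_records = [
    ("In vivo developmental toxicity of compound X in rats", 1),
    ("Cost analysis of laboratory information systems", 0),
    ("In vitro assay of oxidative stress induced by compound X", 1),
    ("Survey of public attitudes toward chemical regulation", 0),
]
texts, labels = zip(*labelled_records)

# TF-IDF features plus a linear classifier: a common baseline for screening prioritisation.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Rank unscreened records so reviewers see the most likely "includes" first.
unscreened = [
    "Zebrafish embryo toxicity screening of compound X",
    "Editorial: trends in conference attendance",
]
scores = model.predict_proba(unscreened)[:, 1]
for text, score in sorted(zip(unscreened, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```

In practice such a model would be retrained iteratively as reviewers screen more records, so that human effort is concentrated on the studies most likely to be relevant.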
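
Similarly, the idea of overlaying a systematic map on an AOP can be sketched as a simple data structure that tallies evidence by key event and evidence stream and flags unpopulated cells as research gaps; the key events and study records below are hypothetical placeholders, not data from the SR mentioned in the abstract.

```python
# Minimal sketch of a "systematic map" overlaid on an Adverse Outcome Pathway (AOP).
# The key events and mapped studies are hypothetical, for illustration only.
from collections import defaultdict

# Ordered key events of a hypothetical AOP, from molecular initiating event to adverse outcome.
aop_key_events = ["MIE: receptor binding", "KE1: oxidative stress",
                  "KE2: hepatocyte injury", "AO: liver fibrosis"]

# Each mapped study is tagged with the key event it informs and its evidence stream.
mapped_studies = [
    {"key_event": "MIE: receptor binding", "stream": "in vitro"},
    {"key_event": "MIE: receptor binding", "stream": "computational"},
    {"key_event": "KE2: hepatocyte injury", "stream": "in vivo"},
]

# Count evidence per key event and stream; empty cells reveal research gaps.
evidence_map = defaultdict(lambda: defaultdict(int))
for study in mapped_studies:
    evidence_map[study["key_event"]][study["stream"]] += 1

for event in aop_key_events:
    streams = dict(evidence_map.get(event, {}))
    print(f"{event}: {streams if streams else 'GAP - no mapped evidence'}")
```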