Mark Cronin
Liverpool John Moores University

Mark Cronin is Professor of Predictive Toxicology at the School of Pharmacy and Biomolecular Sciences, Liverpool John Moores University, UK. He has over 35 years’ experience in the application of in silico approaches to predict the toxicity and fate of chemicals, in addition to developing strategies for alternatives to whole-animal toxicity testing. His current research includes the application of chemical grouping and read-across to assess human health and environmental endpoints, particularly the linking of Adverse Outcome Pathways (AOPs) to category information. This research effort has resulted in four books and over 350 publications across all areas of the use of (Q)SARs, expert systems and read-across to predict toxicity. Current research activities also include the assessment of uncertainties in in silico models, as well as ensuring these models are FAIR (Findable, Accessible, Interoperable and Reusable). He has worked on numerous projects in this area, including more than fifteen EU projects, as well as assisting in the uptake of in silico methods for regulatory purposes.

Better Modelling Improves Non-Animal Chemical Safety Assessment

Mark T.D. Cronin

School of Pharmacy and Biomolecular Sciences, Liverpool John Moores University, Byrom Street, L3 3AF, Liverpool, UK

Computational, or in silico, approaches are fundamental to 21st Century toxicology, providing a means of estimating hazard and exposure for a particular xenobiotic in a particular use scenario. Interest in computational toxicology is growing strongly, driven by new technologies in artificial intelligence. Whilst it is attractive to create models rapidly using some of the comprehensive data sets now available, we must not forget the principles of good modelling. Approaches such as the OECD QSAR Validation Principles and the ECHA Read-Across Assessment Framework (RAAF) are well established to promote valid models and acceptable predictions. However, we believe that modellers interested in supporting chemical safety assessment should go further. For instance, problem formulation is well established in safety assessment but is often lacking in computational modelling. Problem formulation allows the aim of the model to be stated and “acceptance” criteria to be established. To support the development of high-quality models, data quality should be assessed. The problem formulation exercise should identify whether a “quick and dirty” approach to model development is sufficient, or whether more effort should be placed in quality assessment, which has been shown to lead to better models. Evaluation of models should be undertaken in the context of demonstrating their “credibility”, i.e. their verification, validation and the assessment of uncertainties. With regard to uncertainties, new approaches based around EFSA guidance have been developed which allow a “tolerable” level of uncertainty to be defined in areas such as read-across. When models are available, they should be FAIR (Findable, Accessible, Interoperable and Reusable). The “FAIR Lite” principles have recently been proposed, which simplify both the process and the demonstration of FAIRification of in silico approaches.

Acknowledgements: This work was supported by the project RISK-HUNT3R: RISK assessment of chemicals integrating HUman centric Next generation Testing strategies promoting the 3Rs. RISK-HUNT3R has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 964537 and is part of the ASPIS cluster. Disclaimer: This work reflects only the author’s views, and the European Commission is not responsible for any use that may be made of the information it contains.