Session 1: Reproducibility crisis

Reproducibility crisis in the laboratory
PRESENTING AUTHOR: 

Alan W. Baird

INSTITUTION / COMPANY: 

University College Dublin

POSITION: 

Professor of Veterinary Physiology & Biochemistry

Copyright © 2015 by Alan W. Baird
All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher, addressed “Attention: Permissions Coordinator,” at submissions@opentox.com.

ABSTRACT CONTENT / DETAILS: 

Three important phases of drug development are discovery, toxicological assessment and clinical application. Each of these phases is subject to varying degrees of scientific rigour and regulatory review. In basic research, a ‘reproducibility crisis’ has been described, with significant implications for the first of these phases: drug discovery.

Common reasons for this include honest errors, poor scientific method, biased reporting and failure to publish negative results. The lack of reproducibility in basic research may have a corollary in toxicological evaluation.

The likelihood that laboratory-based findings are valid depends upon study power and the number of other studies with which comparisons may be made. It appears that certain published findings may be false, irreproducible or not generally applicable to a broader context. Finally, these circumstances influence clinical guidelines for therapeutic interventions.
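
The dependence on study power and prior plausibility can be made concrete with the standard positive predictive value (PPV) argument. The short Python sketch below is illustrative only, assuming alpha is the false positive rate, power is 1 - beta, and prior_odds is the ratio of true to false hypotheses being tested.

# Illustrative sketch: positive predictive value of a statistically
# significant finding, given study power, the false positive rate (alpha)
# and the pre-study odds that the tested hypothesis is true.
def positive_predictive_value(power: float, alpha: float, prior_odds: float) -> float:
    """Probability that a 'significant' laboratory finding is actually true."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# A well-powered study in a field where 1 in 10 tested hypotheses is true...
print(positive_predictive_value(power=0.80, alpha=0.05, prior_odds=0.1))  # ~0.62
# ...versus an underpowered study of the same question.
print(positive_predictive_value(power=0.20, alpha=0.05, prior_odds=0.1))  # ~0.29

Even at conventional significance thresholds, low power or low pre-study odds pushes the PPV well below one, which is the sense in which certain published findings may be false.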

Such guidelines are themselves based on output from the previously mentioned domains; if that output is flawed, this is a worrying situation. Ideal practice guidelines should be brief and based on scientific evidence.

If the primary data are incomplete or wrong, this has implications for how evidence is used and presented. Regulatory authorities require a database from which unbiased decisions can be made.

Avoiding flawed or misleading data sets requires reliability and validation steps using standardised protocols and data auditing.

Even then, the implications of much data being ‘irreproducible’ must be considered.