Open access peer-reviewed chapter

Animal Models in Drug Development

By Ray Greek

Submitted: May 11th 2012 | Reviewed: September 28th 2012 | Published: January 23rd 2013

DOI: 10.5772/53893


1. Introduction

The use of specific chemicals to treat specific diseases and disorders dates to 1910 when Paul Ehrlich and Sahachiro Hata discovered that salvarsan, also known as arsphenamine and compound 606, killed the microorganism that caused syphilis. Their research relied on animal models of syphilis as, even currently, syphilis cannot be grown in culture medium. Arsphenamine was the first synthetic drug to actually target and kill a disease-causing organism and is credited with starting the pharmaceutical age. Ehrlich is also credited with coining the term magic bullet in reference to a drug that would kill a microorganism without damaging or otherwise affecting the host of the microorganism: the patient. As I will explain, despite being an inspirational concept that led to advances in science and medicine, the notion of a magic bullet proved incomplete. Salvarsan and Ehrlich’s concept of a magic bullet are important to current concepts in drug testing because: 1) salvarsan was initially called compound 606 as it was the 606th compound tested on animals in an attempt to find a treatment for syphilis; and 2) the concept of a magic bullet was based on the scientific process known as reductionism. In this chapter, I will explore the reductionist approach of using animal models in drug development, especially in toxicity testing.

2. Reductionism and complexity

The use of animals as models for human anatomy and pathophysiology dates back millennia but the modern version began with Claude Bernard in the 19th century. Bernard was a firm believer in the reductionist approach to medical science and that approach has indeed served biomedical science well for decades. A review of reductionism will allow us to contrast this approach to understanding the material universe with systems biology, which is needed in order to fully understand complex living systems. [1-13]

Ernst Mayr defines reductionism as: “The belief that the higher levels of integration of a complex system can be fully explained through a knowledge of the smallest components.”[[14] p290] For example, physics attempts to describe the universe in terms of a few elementary particles, and the relationships among them. Reductionism has been very successful in describing many aspects of the material universe, including allowing successful predictions to be made. Reductionism is associated with Newton, Descartes, and determinism and the reliance on animal models in medical science arose during the time of Newtonian physics vis-à-vis reductionism and determinism. Newton said: “Therefore to the same natural effects we must, as far as possible, assign the same causes” and went on to explain that this rule applies “to respiration in a man and in a beast, the descent of stones in Europe and America, the light of our culinary fire and of the sun, the reflection of light in the earth and in the planets.”[[15] p3-5] Both Newton and Claude Bernard subscribed to the position that similar causes yield similar effects. Indeed, this concept was one of the breakthroughs that led to the systematic method of inquiry known as the resoluto-compositive method or method of analysis and synthesis. This concept of causal determinism rests on two claims. First, all events have causes, and second, for qualitatively identical systems, the same cause is followed by the same effect. Causal determinism is a presupposition of much scientific activity. The idea that results in the laboratory can be extended to form expectations about qualitatively similar systems outside the laboratory is embodied in this idea, as is the claim that experiments should be replicable. [16] This was how science viewed the universe, including animate bodies, when the animal model was embraced by science in the 19th century.

Claude Bernard was a strict causal determinist, meaning that if X caused Y in a monkey it would also cause Y in a human. Bernard stated: “Physiologists... deal with just one thing, the properties of living matter and the mechanism of life, in whatever form it shows itself. For them genus, species and class no longer exist. There are only living beings; and if they choose one of them for study, that is usually for convenience in experimentation.”[[17] p 111] Further complicating matters, Bernard and many of his colleagues rejected the notion of evolution put forth by Darwin. [17-19] Bernard thought that organs and other tissues were interchangeable among animals and that all differences could be accounted for based on scaling; the chief difference between humans and animals being a soul.[19] This thinking persists even in recent times, as exemplified by the baboon heart transplant into the recipient Baby Fae, performed by the creationist surgeon Leonard Bailey of Loma Linda University in 1984. [[20] p162-3]

However, recent advances in other disciplines of science, namely chaos and complexity along with evolutionary biology, have called into question the use of reductionism as the sole approach to studying complex systems. Moreover, the developments in evolutionary biology and genetics are cause for further concern regarding the use of one complex evolved system, say a mouse, to predict responses to perturbations such as disease and drugs for another, differently evolved complex system, say a human. For example, we now understand that the same gene can be used in different ways among species and that knocking out a gene in one species is not predictive for the function of that gene in another species.[21-27] This has implications for drug development.

Reductionism was used to study simple systems as opposed to complex systems. Animals, including humans, are complex systems and as such exhibit the characteristics listed below [from [28]].

  1. Complex systems are robust, meaning they have the capacity to resist change. [8, 9, 29-35] This can be illustrated by the fact that knocking out a gene in one strain of mouse may produce no noticeable effects.

  2. Redundancy tends to be a part of complex systems and may explain some aspects of robustness. For example, many members of the kingdom Animalia exhibit gene redundancy. [8, 9, 29-35]

  3. Different parts of a complex system are linked to and affect one another in a synergistic manner. In other words, there is positive and negative feedback in a complex system. [36] This is why overloading one part of a complex system with, say, vitamins may not result in a healthier individual. The feedback system results in the rest of the system acting to simply excrete the unneeded vitamins.

  4. Complex systems are also modular, but failure in one module does not necessarily spread to the system as a whole, because redundancy and robustness also exist. [37-40]

  5. The modules do communicate though. For example, genes tend to be part of networks, genes interact with proteins, proteins interact with other proteins and so on.

  6. Complex systems communicate with their environment; they are dynamic. [37-40]

  7. Complex systems are very dependent upon initial conditions. [39] For example, very small changes in genetic makeup can result in dramatic differences in response to perturbations of the living system.

  8. The causes and effects of the events that a complex system experiences are not proportional to each other. Perturbations to the system have effects that are nonlinear, in other words large perturbations may result in no change while small perturbations may cause havoc. [37-40]

  9. The whole is greater than the sum of the parts. [1, 8, 9, 30, 39]

  10. Complex systems have emergent properties. An emergent property cannot be predicted by full knowledge of the component parts. For example, the flocking of birds and the formation of hurricanes are emergent phenomena, as perhaps is consciousness. [39]

Reductionism is essentially divide and conquer. By dividing a system into its parts and ascertaining the functions of all the parts of the system, one can deduce the function of the entire system. The gears of a Swiss watch, for example, are capable of description on their own, without reference to the system from which they are removed. Conversely, the individual components of a complex system must be described based on the interaction of the parts. Describing individual components in isolation, regardless of how detailed such a description is, cannot fully describe the complex system as a whole. The whole is greater than the sum of the parts. A complex system must be described based on the organization of the individual components. [41, 42]

Miska states:

The basic analytical method that is behind most biomedical research can be traced back over 300 years to Descartes’s essay Discourse on Method, which argued that an animal is a clock-like machine in which the parts and their relationships to one another are precise and unchangeable, and in which causes and effects can be understood by taking the pieces apart. This so-called ‘reductionist’ approach to understanding biology and medicine has been very productive, but is now up against problems that require different frameworks, institutionally and intellectually. [43]

Nicolis & Prigogine defined complexity as the ability of a system “to switch between different modes of behavior as the environmental conditions are varied.”[44] In other words, complex systems are able to adapt to their environments just as life on this planet has adapted, resulting in different species. But these adaptations mean that two complex systems that were originally identical would now be less similar and behave differently in certain circumstances. An example of this would be the difference in susceptibility to disease between monozygotic twins. [45-56] Van Regenmortel states:

Reductionists tend to disregard the fact that all biological systems possess so-called emergent properties that arise through the multiple interconnections and relations existing between individual components of the system. These emergent, relational properties do not exist in the constituent parts and they cannot be deduced or predicted from the properties of the individual, isolated components [[57]p258]. Examples of emergent properties are the viscosity of water (individual water molecules have no viscosity), the colour of a chemical, a melody arising from notes, the saltiness of sodium chloride, the specificity of an antibody and the immunogenicity of an antigen. [58]

Living complex systems are the result of various evolutionary processes and as such are arguably the most complex of all complex systems. Species differ because of the presence of different genes, mutations in the same genes, differences in the number of copies of the same allele (copy number variants), differences in how the same genes are regulated or expressed, alternative splicing, the presence of modifier or background genes, differences in gene networks and protein networks, and convergent evolution, in which two species share a trait but the trait evolved independently in each. Individuals of the same species may differ for many of the above reasons but also because of dissimilarities in environmental exposures. [50] Importantly, each of the above means that different species, as well as individuals of the same species, manifest differences in the initial conditions of their complex system. The above also translates into differences in other characteristics of a complex system such as robustness and redundancy.

The progress in these two areas of science, complexity science and evolutionary biology, results in strong theoretical concerns regarding the use of animals as predictive models in drug development. We should expect animals and humans to share responses to perturbations at the level of organization where complex systems can be described as simple systems but not for perturbations occurring at the level of organization where the system as a whole is studied or where parts of the systems that are themselves complex are studied. I will next examine the empirical evidence and place it in the context of these theoretical concerns.

3. Prediction in science

The third relevant advance in science since animal models were mandated for use in drug development is the formal evaluation of animal models in terms of their predictive value for humans. Animal models are used for ascertaining the properties of absorption, distribution, metabolism, elimination and toxicity (ADMET). As all of these properties influence toxicity, an examination of the ability of animal models to predict these properties is important, as is the straightforward examination of animal models for toxicity itself. The answer to the question of the predictive ability of animal models was hinted at by the fact that Ehrlich and Hata ultimately tested the 606th compound of a series in their attempt to find a treatment for syphilis. Previous compounds had successfully treated syphilis in animal models but had failed for various reasons in humans. Even salvarsan resulted in side effects in humans that were unforeseen in animal models.

The ability to predict facts about the material universe is a hallmark of science. Hypotheses are generated that make predictions about the phenomena under study and the success or failure of these predictions can falsify or strengthen the hypothesis. This use of the term predict differs from determining whether a modality, practice, or test is predictive for its purpose. For example, a CT scan of the chest is a predictive test for diagnosing a pneumothorax, because the CT scan, as opposed to a chest x-ray, is successful in locating the pneumothorax essentially 100% of the time. In order to evaluate a modality like CT scans, a blood test for cancer, or even the use of dogs for catching drug smugglers in airports, the calculations in Table 1 are employed.

When evaluating the predictive value of methods, practices, or tests for use in biomedical science, positive predictive value (PPV) and negative predictive value (NPV) > 0.9 are sought. If a single test alone cannot yield such high values, then a combination of tests can be evaluated in hopes that the combination will meet the criteria. Such evaluations have been made for toxicity testing using animal models as well as other animal model-based tests in drug development. Profound inter-species differences, as well as inter-individual human differences, have been revealed for absorption [[59] p 8-10] [[60] pp 5, 9, 45, 50, 66-7, 90, 102-3] [61-64], distribution [65, 66], metabolism [67-77], elimination [78, 79], and toxicity [64, 80-91], which results in predictive values for these animal models that are far below those required in biomedical science. For example, Litchfield conducted a classic study in 1962 comparing toxicity among three species: humans, rats, and dogs. The positive predictive values for the animal models were between 0.49 and 0.55. [92] Similarly, Suter compared toxicities for ergoloid mesylates, bromocriptine, ketotifen, cyclosporine, FK 33-824, and clozapine in animals and humans. The sensitivity for toxicity for the animal tests was 0.52 and the predictive value positive was 0.31. [93] Fourches et al. evaluated animal and human data for 1061 compounds known to cause hepatotoxicity in humans and found that the concordance or sensitivity among species was around 39-44%.[94] The positive and negative predictive values could not be calculated from the article but would be well below 0.39. Smith and Caldwell studied twenty-three chemicals and discovered that only four were metabolized the same in humans and rats. [70] Sietsema [95] compared the oral bioavailability of 400 drugs in humans with three other species (see Figure 1) and concluded the data was consistent with a “scatter-gram.” Similar results have been obtained from other studies.[84, 96-101]

                  Gold Standard
                  GS+      GS-
Test    T+        TP       FP
        T-        FN       TN

T+ = Test positive
T- = Test negative
T = True
F = False
P = Positive
N = Negative
GS+ = Gold standard positive
GS- = Gold standard negative

Sensitivity = TP/(TP+FN)
Specificity = TN/(FP+TN)
Positive Predictive Value = TP/(TP+FP)
Negative Predictive Value = TN/(FN+TN)

Table 1.

Binomial classification method for calculating sensitivity, specificity, positive predictive value, and negative predictive value when comparing a modality, practice, or test with a gold standard.
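To make the Table 1 calculations concrete, the following minimal Python sketch computes sensitivity, specificity, PPV, and NPV from the four cells of a 2x2 contingency table. The function name and the example counts are hypothetical and purely illustrative; they are not taken from Litchfield, Suter, or any other study cited in this chapter.

```python
# Minimal sketch of the Table 1 calculations. The example counts below are
# hypothetical; they do not correspond to any study cited in the text.

def binary_classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the four Table 1 metrics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),                 # TP / (TP + FN)
        "specificity": tn / (fp + tn),                 # TN / (FP + TN)
        "positive_predictive_value": tp / (tp + fp),   # TP / (TP + FP)
        "negative_predictive_value": tn / (fn + tn),   # TN / (FN + TN)
    }

if __name__ == "__main__":
    # Hypothetical counts: 40 true positives, 45 false positives,
    # 35 false negatives, 80 true negatives.
    metrics = binary_classification_metrics(tp=40, fp=45, fn=35, tn=80)
    for name, value in metrics.items():
        print(f"{name}: {value:.2f}")
```

Under the criteria described above, a modality whose PPV and NPV fall well below 0.9, as in the animal-model studies cited in the text, would not qualify as predictive for its purpose.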

Figure 1.

Variation in bioavailability among species. (Based on data from [95].)

The fact that animal models lack predictive ability is well known.[102-110] This shortcoming includes the inability of animal models to be predictive modalities for carcinogenicity.[111, 112] Salsburg stated: “Thus the lifetime feeding study in mice and rats appears to have less than a 50% probability of finding known human carcinogens. On the basis of probability theory, we would have been better off to toss a coin...”[111]

The general attitude in the drug development-related sciences reflects the empirical evidence. Cook et al:

Over many years now there has been a poor correlation between preclinical therapeutic findings and the eventual efficacy of these [anti-cancer] compounds in clinical trials [109, 110].... The development of antineoplastics is a large investment by the private and public sectors, however, the limited availability of predictive preclinical systems obscures our ability to select the therapeutics that might succeed or fail during clinical investigation. [108]

Reuters quoted Francis Collins, Director of the NIH, as stating that: “about half of drugs that work in animals may turn out to be toxic for people. And some drugs may in fact work in people even if they fail in animals, meaning potentially important medicines could be rejected.”[113] Alan Oliff, former executive director for cancer research at Merck Research Laboratories in West Point, Pennsylvania asserted in 1997: “The fundamental problem in drug discovery for cancer is that the [animal] model systems are not predictive at all.”[114] Björquist and Sartipy stated: “Furthermore, the compound attrition rate is negatively affected by the inability to predict toxicity and efficacy in humans. These shortcomings are in turn caused by the use of experimental pre-clinical model systems that have a limited human clinical relevance...”[115] In 2006, then U.S. Secretary of Health and Human Services Mike Leavitt declared: “Currently, nine out of ten experimental drugs fail in clinical studies because we cannot accurately predict how they will behave in people based on laboratory and animal studies.”[116] Zielinska, writing in The Scientist supported the above, stating:

Mouse models that use transplants of human cancer have not had a great track record of predicting human responses to treatment in the clinic. It’s been estimated that cancer drugs that enter clinical testing have a 95 percent rate of failing to make it to market, in comparison to the 89 percent failure rate for all therapies... Indeed, “we had loads of models that were not predictive, that were [in fact] seriously misleading,” says NCI’s Marks, also head of the Mouse Models of Human Cancers Consortium... [117]

The inability of animal models to predict human response has also increased the cost of drug development as the cost for the 90-95% of drugs that fail must be recouped from the ones that go to market.[91, 118-121] Lost revenue has also resulted from the drugs that would have been marketable had animal models not derailed them in development. This lack of predictive ability for animal models is largely to blame for the cost of new medications and for the fact that the drug development pipeline is drying up.[115, 122, 123] Because animal models fail to identify drugs destined to fail, these drugs proceed to clinical trials and marketing, which consume roughly 95% of the cost of drug development.[124, 125] Catherine Shaffer, Contributing Editor of Drug Discovery & Development, wrote in 2012: “Drug development is an extremely costly endeavor. Estimates of the total expense of advancing a new drug from the chemistry stage to the market are as high as $2 billion. Much of that cost is attributable to drug failures late in development, after huge investments have been made. Drugs are equally likely to fail at that stage for safety reasons, as for a lack of efficacy, which is often well-established by the time large trials are launched.”[120] Roy estimates the real cost is even higher: “The true amount that companies spend per drug approved is almost certainly even larger today. Matthew Herper of Forbes recently totaled R&D spending from the 12 leading pharmaceutical companies from 1997 to 2011, and found that they had spent $802 billion to gain approval for just 139 drugs: a staggering $5.8 billion per drug.”[125] Kenneth Kaitin, director of the Tufts Center for the Study of Drug Development, commenting on Pharma’s drying pipeline in the March 7, 2011 New York Times, stated: “This is panic time, this is truly panic time for the industry.” Even when a drug does reach the market, there is a great amount of uncertainty regarding safety. For example, over 1000 drugs that reached the market were discovered to result in hepatotoxicity. [126]

Kirschner addressed this issue, asking: “could we develop a better way of predicting whether a drug will work or have intolerable side effects?” He then explains the problem in terms similar to what I have presented above:

In part, this problem stems from the fact that we rarely have a situation in which one gene can be linked to one disease and targeted by one drug. The nature of our biological system is that we have relatively few genes — say 20,000 basic core genes — that are used over and over again in different contexts. So when we investigate targets, we need to better appreciate how these function in different contexts. Moreover, there are many overlapping and redundant pathways, so we need to better understand genes not as individual elements with individual functions but within the context of the circuits in which they operate. This approach requires not just a wiring diagram, but a quantitative wiring diagram... [127]

4. Current efforts at improving drug development

4.1. Twenty-first century science

Today we have options for drug development and toxicity testing that did not exist until the 21st century, for example microdosing and pharmacogenomics. Two points need be emphasized before I address these two advances, however. First, animal models fail to meet the ends for which they are used; they are not predictive modalities for human response. Therefore using animal models is akin to relying on bloodletting as a treatment for cancer when oncologists have no cures for the cancer in question. Just as bloodletting is not effective as a treatment for cancer, regardless of whether or not other options are available, so employing animal models as they are currently utilized is nonsensical.

Second, technology is available, or is being developed, that will at least predict human response for certain properties important in drug development. However, regardless of how much time is needed in order for these technologies to be developed, animal models are simply ineffective and hence should be abandoned. Lack of effective technology does not justify the utilization of methods proven to be ineffective. Regardless of the technologies available, drug development must be human-based both when reductionism is used and when complexity is relevant. Basing drug development decisions on drug targets identified from animal models has not been effective. Human tissues can be studied instead and this will allow targets to be established in a more reliable manner. Humans must also be studied when responses to drugs are occurring at higher levels of organization; where the system is complex.

In 2006, the FDA approved microdosing for Phase 0 clinical trials.[128, 129] Microdosing is the process whereby very small doses of a drug are administered to human volunteers, after which positron emission tomography (PET) and accelerator mass spectrometry (AMS) are used to assess pharmacokinetic (PK) data.[130-132] While animal models are used to inform the dose for the first administration of the drug, the usual range for drugs is 100 ng to 100 μg. If all drugs were initially administered at a dose of 1 ng and subsequently increased, this would obviate the use of unreliable animal models and ensure that the first-in-human dose was lower than the toxic dose of even the most toxic substance currently known.[133, 134] This would be a reliably safe method for conducting first-in-human trials. Although in practice microdosing is currently only used to evaluate PK (as opposed to pharmacodynamics, which is abbreviated as PD), it could be used for evaluating the other properties of interest. For example, by increasing the dose incrementally, the drug could be evaluated for toxicity. This solves the problem of unanticipated catastrophic reactions such as occurred in the TGN1412 trial [135] and allows toxicity to be determined very early in the drug development process. Long-term carcinogenicity studies could not be conducted in this fashion; however, animal models are not predictive for carcinogenicity, and human data from long-term use is the de facto method now used. Nothing would be lost by eliminating long-term carcinogenicity studies in animals until predictive technologies are developed. According to the Centers for Disease Control and Prevention (CDC): "Most of what we know about chemicals and cancer in humans comes from scientists' observation of workers. The most significant exposures to cancer-causing chemicals have occurred in workplaces where large amounts of toxic chemicals have been used regularly."[136]

The concept of microdosing, used in combination with pharmacogenomics (see below), would allow go/no-go decisions to be made early and reliably in drug development, as well as matching drugs to patients. The transition to full-scale clinical trials would also be seamless. As the dose was increased, an evaluation of efficacy could be made. By starting the dose at 1 ng and increasing it, the entire clinical trial could be conducted much more reliably and efficiently, drugs destined to fail could be eliminated earlier, thus saving money, and drugs could be matched to genotype before being marketed, further saving money and decreasing side effects. This leads us to the concepts of pharmacogenomics and personalized medicine.
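As a rough illustration of the incremental dosing idea described above, the short sketch below generates a geometric escalation schedule starting at 1 ng. The half-log step size and the 100 μg ceiling (the conventional microdose limit) are illustrative assumptions only; they are not a regulatory protocol or a scheme proposed in the cited references.

```python
# A rough sketch of the incremental first-in-human dosing idea described above:
# start at 1 ng and escalate geometrically. The half-log (~3.16x) step size and
# the 100 ug ceiling are illustrative assumptions, not a prescribed protocol.

def escalation_schedule(start_ng: float = 1.0,
                        ceiling_ng: float = 100_000.0,   # 100 ug expressed in ng
                        step: float = 10 ** 0.5) -> list:
    """Return a geometric dose-escalation schedule in nanograms."""
    doses = []
    dose = start_ng
    while dose <= ceiling_ng:
        doses.append(dose)
        dose *= step
    return doses

if __name__ == "__main__":
    for i, dose in enumerate(escalation_schedule(), start=1):
        print(f"step {i:2d}: {dose:12.2f} ng")
```

In practice, the step size and stopping rules would be driven by the PK, PD, and toxicity data collected at each step rather than fixed in advance.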

Personalized medicine seeks to individualize medicine both in terms of treatment and diagnosis while pharmacogenomics matches drugs to patients. Rashmi R Shah, former Senior Clinical Assessor, Medicines and Healthcare products Regulatory Agency, London, stated in 2005: “During the clinical use of a drug at present, a prescribing physician has no means of predicting the response of an individual patient to a given drug. Invariably, some patients fail to respond beneficially as expected whereas others experience adverse drug reactions (ADRs).”[137] Shah echoed comments by Allen Roses, then-worldwide vice-president of genetics at GlaxoSmithKline (GSK), who stated that fewer than half of the patients prescribed some of the most expensive drugs derived any benefit from them: “The vast majority of drugs - more than 90% - only work in 30 or 50% of the people.”[138] That individual humans respond very differently to disease and drugs [139, 140], including vaccines [141, 142], has long been appreciated. During the Korean War, Alving observed that black soldiers had an increased probability, compared with white soldiers, of developing anemia when given antimalarials. This was discovered to be secondary to a commonly occurring enzyme deficiency in the black soldiers.[143] Variation in disease susceptibility and response to drugs has been noted to exist between sexes [144-150] and ethnic groups [151-159] as well as between monozygotic twins.[45-52, 56]

Many advances have been made in linking drugs to genes, in part because of spin-offs from the Human Genome Project. Differences between humans, including single nucleotide polymorphisms, copy number variants, differences in the regulation and expression of the same genes, differences in gene networks, and the influence of background genes, can result in a drug being efficacious for one patient but not another. Diseases vary intra-species as well. Michael Snyder, chair of genetics at Stanford University School of Medicine, recently stated: “However, the bulk of the differences among individuals are not found in the genes themselves, but in regions we know relatively little about. Now we see that these differences profoundly impact protein binding and gene expression.”[160, 161] Hunter et al. studied a mouse model of cancer and discovered differences in metastatic efficiency secondary to background genes. Hunter et al.:

Because all tumors were initiated by the same oncogenic event, differences in the metastasis microarray signature and metastatic potential are probably due to genetic background effects rather than different combinations of oncogenic mutations. Consistent with our observations in metastasis, several laboratories have shown similar strain differences with regard to oncogenesis, aging and fertility in transgenic mouse models.[162-164] Data on both primary tumors and metastases reinforce the notion that tumorigenesis and metastasis are complex phenotypes involving both inherent genetic components and cellular responses to extrinsic stimuli. [165]

Thein likewise stated: “As the defective genes for more and more genetic disorders become unravelled, it is clear that patients with apparently identical genotypes can have many different clinical conditions even in simple monogenic disorders.” Thein assessed β-thalassemia and noted that the clinical manifestations are very diverse, ranging from life-threatening to asymptomatic. Thein: “The remarkable phenotypic diversity of the β-thalassemias is prototypical of how a wide spectrum of disease severity can be generated in single gene disorders.... relating phenotype to genotype is complicated by the complex interaction of the environment and other genetic factors at the secondary and tertiary levels...”[166]

Agarwal and Moorchung reinforce the above stating: “It is now increasingly apparent that modifier genes have a considerable role to play in phenotypic variations of single-gene disorders.” This is due to factors such as: “Oligogenic disorders occur because of a second gene modifying the action of a dominant gene. It is now certain that cancer occurs due to the action of the environment acting in combination with several genes.”[167] Friedman and Perrimon explain that there are “hundreds of potential regulators of known signaling pathways.”[168] PLoS Biology, in an editorial said the following about mouse models of autoimmune diseases: “These results fall in line with mounting evidence that background genes are not silent partners in gene-targeted disease models, but can themselves facilitate expression of the disease. This finding underscores the notion that genes are not solitary, static entities; their expression often depends on context. With genetically complex diseases, having the requisite combination of susceptibility genes does not always lead to disease.”[169]

Liu et al. explain why the same genes can result in very different outcomes:

A general view is that critical genes involved in biological pathways are highly conserved among species. To understand human autoimmune diseases, a great deal of effort has been devoted to the study of murine models that mirror many pathologic properties observed in the human disease. We have found that lymphocytes from humans with different autoimmune disease all carry a common conserved gene expression profile. Therefore, we wanted to determine if lymphocytes from common murine models of autoimmune disease carried a gene expression profile similar to the human profile and if both mouse models carried a shared gene expression profile. We identified numerous differentially expressed genes (DEGs) in the autoimmune strains compared to non-autoimmune strains. However, we found very little overlap in the gene expression profile between human autoimmune disease and murine models of autoimmune disease and between different murine autoimmune models. Our research further confirms that murine models of autoimmunity do not perfectly match human autoimmune diseases. [26]

Weiss et al continues this theme:

In contrast to these single gene effects, many drug treatment response phenotypes are complex, produced by multiple coding and regulatory variants in multiple genes that often interact in a signalling pathway. In these cases, each variant could contribute to the variance in the phenotype and there is no clear model of genetic inheritance. Genetic factors that influence whether a drug treatment response is complex include mode of inheritance (recessive versus dominant or additive); pleiotropy; incomplete penetrance; and epistasis, due to gene–environment interactions and environmental phenocopies. All of these factors contribute to the complexity of the response phenotypes. [170]

Gabor Miklos states:

There is enormous phenotypic variation in the extent of human cancer phenotypes, even among family members inheriting the same mutation in the adenomatous polyposis coli (APC) gene believed to be causal for colon cancer. In the experimental mouse knockout of the catalytic gamma subunit of the phosphatidyl-3-OH kinase, there can be a high incidence of colorectal carcinomas or no cancers at all, depending on the mouse strain in which the knockout is created, or into which the knockout is crossed... [27]

Because of advances alluded to above, society is seeing the death of the blockbuster and the arrival of the “niche buster.” [171] Herscu et al write: “The era of the 'blockbuster drug model' is ending, and the development of personalized pharmaceutical system is on the rise.”[172] This is also due to the fact that diseases are being categorized into more types, and individuals even within the same type react differently to drugs. Herscu et al write:

Diabetes mellitus, for example, was simply divided into juvenile or adult onset types for many years. Now we have pre-diabetes; Type I, broken into immune-related and other causes; Type 2, broken into secretory defect and insulin-resistant types; and more than 11 types that have been linked to specific genetic defects. However, even diabetic patients in a precisely defined category with shared genetic markers differ because they exist at different points along the continuum of the disease depending on their diet, exercise, comorbid conditions and other factors. These phenotypic dissimilarities are the source of inter-patient variability, which confounds both clinical trials and treatment results. [172]

Iressa was one of the first medications administered to patients based on genotype. Iressa did not perform well in clinical trials and was to be abandoned, but clinicians were adamant that it helped some people with cancer. By genotyping the patients who responded well to Iressa, researchers were able to confirm that, in certain genotypes, Iressa was efficacious. Numerous drug responses have been matched to specific mutations. [77, 173-176] The Personalized Medicine Coalition notes that personalized medicine will allow patients and physicians to:

  • select optimal therapy and reduce "trial-and-error" medicine;

  • reduce adverse drug reactions;

  • improve the selection of drug targets;

  • increase patient compliance with therapy;

  • reduce the time, cost, and failure rate of clinical trials;

  • revive drugs that failed clinical trials or were withdrawn from the market;

  • avoid withdrawal of marketed drugs;

  • shift the emphasis in medicine from reaction to prevention; and

  • reduce the overall cost of healthcare.[177]

5. Conclusion

We are currently living in what will become known as the Age of Personalized Medicine. While much has yet to be discovered, society is already benefitting from personalized medicine applied to specific drugs and diseases. Contrast this with using a different species in an attempt to predict human response to drugs and disease. While animals can be used in basic science pursuits, empirical evidence from drug development, placed in the context of the scientific theories of Complexity and Evolution, demands that animal testing be replaced with human-based drug development. Implementing human-based testing early in the development process is how drugs should be developed now and it will be how drugs are developed in the future.
