Open access peer-reviewed chapter

Scientific and Ethical Considerations for Increasing Minority Participation in Clinical Trials

Written By

Julius M. Wilder

Submitted: 13 December 2016 Reviewed: 21 June 2017 Published: 09 May 2018

DOI: 10.5772/intechopen.70181

From the Edited Volume

Clinical Trials in Vulnerable Populations

Edited by Milica Prostran


Abstract

Since its inception, a major weakness in clinical trial research has been an inability to recruit diverse populations into clinical trials. These under-represented populations mostly comprise the poor, the elderly, children, women, and racial/ethnic minorities (African Americans and Hispanics). This fundamental weakness is further exacerbated by the fact that these same groups are often disproportionately affected by the diseases being studied in clinical trials. There are various patient-specific, provider-specific, and policy-related causes for these disparities. Regardless of the cause, the lack of participation of these groups in clinical trials raises important questions about the quality and ethics of clinical research. The goal of this document is to discuss the evidence for, and reasons behind, disparities in clinical trial participation. We also provide a discourse on potential mechanisms to address disparities in clinical trial accrual, including the ethical considerations of financial incentives and the potential impact of a more stringent Food and Drug Administration (FDA) policy and review process for product approval, including a diversity mandate with an associated population black box warning.

Keywords

  • African American
  • Hispanic
  • clinical trials
  • underrepresented
  • health disparities
  • Food and Drug Administration (FDA)

1. Introduction

Since its inception, a major weakness in clinical trial research has been an inability to recruit diverse populations into clinical trials. These under-represented populations mostly comprise the poor, the elderly, children, women, and racial/ethnic minorities (African Americans and Hispanics). This fundamental weakness is further exacerbated by the fact that these same groups are often disproportionately affected by the diseases being studied in clinical trials. There are various patient-specific, provider-specific, and policy-related causes for these disparities. Regardless of the cause, the lack of participation of these groups in clinical trials raises important questions about the quality and ethics of clinical research. The goal of this document is to discuss the evidence for, and reasons behind, disparities in clinical trial participation. We also provide a discourse on potential mechanisms to address disparities in clinical trial accrual, including the ethical considerations of financial incentives and the potential impact of a more stringent Food and Drug Administration (FDA) policy and review process for product approval, including a diversity mandate with an associated population black box warning.


2. Evidence of disparity among racial/ethnic minorities

The representation of minorities in clinical trials has historically been low. Between 1996 and 2002, blacks represented on average 9.3% of the total number of enrollees in cancer clinical trials [1]. That number peaked at 11% in 1996 and steadily declined to 7.9% in 2002 [1]. Similarly, Hispanics represented on average 3.1% of the total number of enrollees in cancer clinical trials between 1996 and 2002 [1]. Not only are minorities under-represented in clinical trials, but the overall racial and ethnic composition of clinical trials is not reported at an acceptable rate. Between 1990 and 2000, only 35.1% of treatment studies among cancer clinical trials reported race/ethnicity [2]. This number increased to 51.6% from 2001 through 2010, but the percentage of blacks included in the analysis for the second decade decreased by 42% [2]. The lack of diversity goes beyond race and ethnicity. Similar trends are found for women, who represented only 26.5% of the population in prevention studies between 2001 and 2010 [2].

The disparity in clinical trial participation is not limited to cancer clinical trials, and its scientific impact on outcomes can be significant. African Americans are disproportionately affected by hepatitis C virus (HCV) in the United States, where HCV infection is the leading cause of cirrhosis and hepatocellular carcinoma and the most common indication for liver transplantation [3]. Although African Americans comprise approximately 13% of the US population, they make up approximately 23% of Americans with hepatitis C [4]. The rate of a positive HCV antibody test is higher in blacks than in whites (3.2% versus 1.5%), and black men have higher rates of infection, with the highest prevalence (9.8%) among black men ages 40–49 years [5, 6]. HCV treatment has undergone a rapid evolution, with the first-generation protease inhibitors released in 2011 and now multiple direct-acting antiviral regimens with excellent outcomes [7, 8].

We performed a meta-analysis of clinical trials on hepatitis C treatment between January 2000 and December 2011 [9] to evaluate the participation of African Americans in hepatitis C clinical trials, given the tremendous burden of that disease within this population. We reviewed 588 randomized controlled trials (RCTs) of hepatitis C treatment with interferon 2a or 2b over that period; of the 588 reviewed, 314 (53.4%) fit inclusion criteria [9]. Of the 314 RCTs that met search criteria, only 123 (39.2%) actually reported race. We evaluated clinical trials in North America and Europe and found significant differences: trials performed in North America were more likely to report racial data than European trials, although racial reporting increased over time in both regions. Our main outcome was the rate of African American/black participation in North American HCV clinical trials [10]. There was a statistically significant difference between the expected and observed participation of African Americans in HCV clinical trials in North America based on the prevalence of this disease within the population. The observed rate was 0.148 (95% CI, 0.126–0.174). Therefore, among those clinical trials reporting race, African Americans were significantly under-represented, especially given the disproportionate burden of hepatitis C within this population [9].
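A proportion with a confidence interval of this kind is a standard binomial calculation; as a sketch, a Wilson score interval can be computed as below. The enrollment counts used here are hypothetical round numbers chosen only to illustrate the arithmetic, not the study's actual denominators, and the 0.23 comparison value is the approximate share of US hepatitis C cases among African Americans cited above.

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

# Hypothetical counts: 148 African American participants among 1000 enrollees.
lo, hi = wilson_ci(148, 1000)
print(f"observed rate 0.148, 95% CI ({lo:.3f}, {hi:.3f})")

# Compare the upper bound against an expected rate based on disease burden.
expected = 0.23
print("under-represented" if hi < expected else "consistent with expected rate")
```

Because the entire interval falls below the expected rate, the shortfall cannot plausibly be attributed to sampling noise, which is the sense in which the under-representation is statistically significant.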


3. Reasons for the disparity

The persistent disparity in minority clinical trial participation results from a combination of historical, demographic, and socioeconomic factors. These complex issues combine to create barriers that prevent clinical researchers from reaching communities and prevent communities from engaging in clinical trial research.

Socioeconomic status (SES) is a major contributor to disparities in minority clinical trial participation. SES affects minority trial accrual primarily through individual, patient-level access to the resources associated with clinical trial participation. Lower levels of education and income are known to correlate with a lack of insurance and underinsurance; previous estimates have shown that up to 30% of the US population is underinsured or uninsured [11]. While the Patient Protection and Affordable Care Act of 2010 will significantly improve access to health insurance, its impact on clinical trial participation will need to be studied, given the lack of uniformity in implementation across the individual US states. Low SES presents issues with transportation as well [12]: patients with limited transportation resources are often logistically unable to make the follow-up appointments associated with participation in a clinical trial [1, 12]. The cost of transportation and clinical trial participation goes beyond typical expenses such as fuel and mileage; many patients of lower SES also cannot afford to miss the time from work required for clinical trial visits. A lack of education and awareness about cancer and clinical trials has been shown to reduce participation [12, 13, 14, 15], reflecting limited knowledge of the cancer diagnosis, the treatment options, and what clinical research actually entails. One's SES is often reflected in the neighborhood in which one lives, and, unfortunately, living in a lower-SES neighborhood (urban or rural) reduces the likelihood of having access to clinical trial research [12].

Cultural issues related to race and ethnicity also contribute to disparities in clinical trial participation. The race of one's provider can affect access to clinical trials: under-represented minorities are more likely to receive healthcare from a physician who is also a minority [1, 16], yet few minority physicians are engaged in clinical research, which reduces opportunities for the minority populations they serve to be engaged in it. A lack of culturally appropriate educational materials targeting individuals with diverse cultural backgrounds and those for whom English is a second language contributes to an inability to recruit individuals of Hispanic and other ethnic backgrounds [12, 17, 18, 19].

Provider characteristics also contribute to the lack of diversity within clinical trials. Provider attitudes toward a patient's age and comorbidities [20, 21], physician perception of patient mistrust of researchers [21, 22], and lack of physician awareness of clinical trials [21, 23] all contribute to disparities in clinical trial research. Furthermore, studies targeting under-represented minorities (Hispanics and blacks) have found that provider miscommunication, including lack of compassion, lack of respect, and perceived mistrust, has contributed to minorities hesitating to engage in clinical trial research [21]. The legacy of the Tuskegee syphilis experiment still resonates among blacks in the United States when they are confronted with issues around clinical trial research. The Tuskegee study involved 400 African American males with syphilis who were systematically denied treatment from 1932 to 1972 even though a known treatment existed [24, 25]; it was the longest nontherapeutic experiment on human beings in medical history [25]. These men were largely illiterate and uninformed about the risks and benefits of the study despite the existence of US policy to protect clinical research subjects [24]. The social and political significance of the US Public Health Service performing unethical research on African Americans following the Civil Rights Movement of the 1960s was monumental and continues to fuel the mistrust of clinical research among African Americans today [24, 25].


4. Addressing disparities in clinical trial accrual

Policy initiatives have been implemented in an attempt to address the lack of diversity in clinical trials. The National Institutes of Health (NIH) Revitalization Act of 1993 created a mandate for the appropriate inclusion of minorities in all NIH-funded research projects [26]. Twenty years after the implementation of this act, the accrual and reporting of minorities within clinical trials remain inadequate: a recent review shows that the reporting of race/ethnicity ranged from 1.5% to 58%, with only 20% of RCTs in high-impact journals reporting subgroup analyses by race/ethnicity [27]. The failure of the 1993 Revitalization Act to address disparities in minority clinical trial accrual may be related to its impact being limited to NIH-funded studies. From 1990 to 2000, a period that includes implementation of the act, 61.5% of cancer clinical trials used funding only from the NIH or other federal sources [2]; during this same period, the percentage of trials using only pharmaceutical funding was 13.2% [2]. However, in the next decade (2001–2010), the percentage of cancer clinical trials relying solely on NIH or federal funding decreased to 35.9%, while the percentage relying solely on pharmaceutical funding increased to 50.4% [2]. The research landscape has therefore changed with respect to primary funding sources, and any policy attempting to influence how research is done must consider the primary source of funding for clinical trial research today.

The 1997 Food and Drug Administration Modernization Act (FDAMA) [28, 29] is an example of policy that recognized funding sources and used them to drive positive change in clinical trial accrual. This law included a "Pediatric Exclusivity Provision," which provided an additional 6 months of patent protection or marketing exclusivity in return for performing studies specified by the FDA [28]. The provision for economic incentives was then extended by the Best Pharmaceuticals for Children Act of 2002 [28, 30]. Following this, the Pediatric Research Equity Act of 2003 allowed the FDA to require pediatric studies of certain drugs and biological agents [31]. The purpose of these laws was to provide financial motivation for pharmaceutical companies to engage in pediatric clinical research. Children, like minorities, the elderly, the poor, and women, were a group often not included in clinical trials. In return for their willingness to provide drug labeling information for children and to increase delivery of pediatric biologics to the market, pharmaceutical companies received financial benefits. These laws resulted in critical changes in drug labeling for pediatric patients, since unique pediatric dosing is often necessary because of the growth and maturational stages of pediatric patients [31]. Furthermore, since the implementation of these incentive programs, the majority of approved biologics (vaccines, anti-toxins, and insulin) include pediatric information in their labeling [32].

A major reason for concern with financial incentives for pharmaceutical companies to engage in pediatric clinical research is monetary. There is concern that the pharmaceutical industry could reap great financial reward from the patent extensions, and many question the ethics of such rewards in return for doing what is deemed the morally correct thing to do [28]. Complicating the debate is the fact that the financial data are mixed. Li et al. [28] performed a cohort study of nine drugs granted pediatric exclusivity and found that exclusivity did not guarantee a financial windfall, with the distribution of net economic return for 6 months of exclusivity varying widely [net return ranged from (−)$8.9 million to (+)$507.9 million] [28]. At times, however, the financial incentives provided appear disproportionate to the cost of the research being done, because the profit ratio for certain blockbuster medications (e.g., anti-hypertensives) can be as high as 17:1 [28, 33, 34, 35].

Within 10 years of the implementation of the Pediatric Exclusivity Provision in 1997, more than 300 studies had been conducted and more than 115 products had labeling changes to account for pediatric use [31, 33, 34]. However, many of the drugs studied were not considered important targets for the pediatric population [33, 34]. Pediatric exclusivity studies have also tended to focus on more profitable drugs, and the drugs most frequently studied are more likely to be important targets for the adult population [33, 34, 36]. The literature also raises important concerns about the quality of the clinical trials conducted under the exclusivity provision [37]: patent extensions are granted regardless of the quality or outcome of the clinical trial, and many of the results of studies done under the provision are never published [38].

The data on the use of financial incentives to increase clinical trial diversity are thus mixed, with evidence of success in terms of new drug labeling and evidence of failure in terms of poor research quality and financial windfalls for industry. Nevertheless, financial incentives to increase diversity in clinical trials may be a viable and aggressive means of eliminating these health disparities if implemented in a thoughtful and ethical way. Men make up more than two-thirds of the population in clinical tests of cardiovascular devices [10]. African Americans and Hispanics make up 12% and 16% of the US population, respectively, yet they make up only 5% and 1% of clinical trial participants [10]. There are clear benefits and costs associated with the use of financial incentives, and these must be carefully weighed when implementing any policy that uses them to increase clinical trial diversity.

An alternative to financial incentives for improving clinical trial diversity would be a stronger stance by governing and regulatory bodies such as the NIH and FDA. The FDA recently implemented the Food and Drug Administration Safety and Innovation Act (FDASIA). Section 907 of this act directs the FDA to investigate how well demographic subgroups (sex, age, race, and ethnicity) are included in the clinical trials submitted in applications for medical products, and whether subgroup-specific safety and effectiveness data are available. While this does not create an inclusion mandate like the NIH Revitalization Act of 1993, it is an important step toward improving clinical trial accrual of under-represented populations. Given the size of the disparity, however, a more aggressive stance is required. Laws implemented by the NIH and FDA 10 years ago could have had significant effects on diversity within clinical trials because the government (NIH) was the major funder of clinical trials at that time. Today, however, the major funder of clinical trials in the United States is the pharmaceutical industry. If the goal is to create policies that change research behavior in clinical trials today, those policies must take a stronger stance on requirements for new drug approval, or creatively and ethically consider what is valued by the main funder of clinical trials: industry. The literature has previously discussed in great detail the issue of financial incentives to increase clinical trial diversity. Now is the time to begin a discourse on the role of an FDA mandate on clinical trial diversity.

A mandate on diversity by the FDA would be a powerful motivating factor for the pharmaceutical industry to increase clinical trial diversity. An FDA-mandated minimum level of diversity (a diversity benchmark) within clinical trials could be applied to research in areas of medicine with known health disparities (hepatitis C, cardiovascular disease, diabetes). Through the creation of an expert panel, the need for specific emphasis on diversity within certain populations could be assessed. For example, an FDA- or industry-appointed expert panel on cardiovascular disease could mandate that phase 3 clinical trials of hypertension medications establish a minimum level of African American participation, given the disparities that exist in hypertension control among African Americans. Trials that successfully reach the diversity benchmark could be eligible for an expedited approval process, while those that do not would receive the equivalent of a black box warning noting the lack of data in diverse populations. Exceptions could be made in scenarios where earnest attempts to accrue diverse populations were made but proved unsuccessful.
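As a sketch of how such a benchmark might operate as a decision rule, consider the following; the group names, threshold values, and outcome labels are hypothetical illustrations of the proposal, not existing FDA policy:

```python
def review_path(enrollment: dict, benchmarks: dict) -> str:
    """Classify a trial under a hypothetical diversity-benchmark rule.

    enrollment: group -> fraction of trial participants from that group
    benchmarks: group -> minimum required fraction (illustrative values)
    """
    # Identify every benchmarked group whose enrollment falls short.
    shortfalls = {group: required - enrollment.get(group, 0.0)
                  for group, required in benchmarks.items()
                  if enrollment.get(group, 0.0) < required}
    if not shortfalls:
        return "expedited review"          # all benchmarks met
    return "population black box warning"  # label flags missing subgroup data

# Hypothetical phase 3 hypertension trial: 18% African American enrollment
# measured against an illustrative 15% benchmark.
print(review_path({"African American": 0.18}, {"African American": 0.15}))

# The same benchmark applied to a trial enrolling only 5%.
print(review_path({"African American": 0.05}, {"African American": 0.15}))
```

An exceptions pathway for documented but unsuccessful recruitment efforts, as described above, would sit outside this simple rule and require panel review.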

The benefits and risks of improving diversity through any mandate or benchmark would have to be weighed. There is a potential for impeding clinical research because of the time required to achieve diversity, and financial resources would be needed to invest in the accrual of diverse populations. The expertise of thought leaders would be key to identifying which clinical trials should be required to reach diversity benchmarks and precisely what those benchmarks should be; achieving such a consensus would require close collaboration between regulatory entities (the FDA) and the pharmaceutical industry. Although implementing such a policy would have pros and cons, previous, less assertive policies have not addressed this important issue, and the pharmaceutical industry has not shown a propensity for addressing it on its own.

Finally, a core ethical issue in clinical research is justice, and the lack of diversity in clinical trial participation represents an injustice. The clinical trial population should reflect the population affected by the disease being studied. Not doing so risks the possibility that new therapies are efficacious and safe in only a small proportion of the population at risk for the disease. Or worse, as has historically been the case in hepatitis C, clinical trials may lack reasonable representation of the individuals at greatest risk for the disease and with the worst outcomes (African Americans). This system produces lower-quality research, and the knowledge gained from such trials has less value than findings from clinical trials that truly reflect the genetic diversity of the disease process being studied.

As has been the case throughout US history, addressing social injustices requires aggressive policy to ensure that significant, positive progress is achieved. Here, we have discussed the potential for financial incentives as well as an FDA mandate, in conjunction with appropriate product labeling, to increase diversity in clinical trials. We urge a discourse on these issues because industry is now the major funder of clinical trials and previous policies to address diversity in clinical trials have failed. Any cost associated with financial incentives or an FDA mandate pales in comparison to the cost of lives lost because of unjust and inferior clinical research.

References

  1. Colon-Otero G, Smallridge RC, Solberg LA Jr, Keith TD, Woodward TA, Willis FB, et al. Disparities in participation in cancer clinical trials in the United States: A symptom of a healthcare system in crisis. Cancer. 2008;112(3):447-454
  2. Kwiatkowski K, Coe K, Bailar JC, Swanson GM. Inclusion of minorities and women in cancer clinical trials, a decade later: Have we improved? Cancer. 2013;119(16):2956-2963
  3. Shepard CW, Finelli L, Alter MJ. Global epidemiology of hepatitis C virus infection. The Lancet Infectious Diseases. 2005;5(9):558-567
  4. Armstrong GL, Wasley A, Simard EP, McQuillan GM, Kuhnert WL, Alter MJ. The prevalence of hepatitis C virus infection in the United States, 1999 through 2002. Annals of Internal Medicine. 2006;144(10):705-714
  5. NIH consensus statement on management of hepatitis C: 2002. NIH Consensus and State-of-the-Science Statements. 2002;19(3):1-46
  6. Alter MJ, Kruszon-Moran D, Nainan OV, McQuillan GM, Gao F, Moyer LA, et al. The prevalence of hepatitis C virus infection in the United States, 1988 through 1994. The New England Journal of Medicine. 1999;341(8):556-562
  7. Jacobson IM, McHutchison JG, Dusheiko G, Di Bisceglie AM, Reddy KR, Bzowej NH, et al. Telaprevir for previously untreated chronic hepatitis C virus infection. The New England Journal of Medicine. 2011;364(25):2405-2416
  8. Poordad F, McCone J, Bacon BR, Bruno S, Manns MP, Sulkowski MS, et al. Boceprevir for untreated chronic HCV genotype 1 infection. The New England Journal of Medicine. 2011;364(13):1195-1206
  9. Wilder J, Saraswathula A, Hasselblad V, Muir A. Journal of the National Medical Association. 2016;108(1):24-29. DOI: 10.1016/j.jnma.2015.12.004
  10. FDA Office of Minority Health (OMH). 2013. http://www.fda.gov/ForConsumers/ConsumerUpdates/ucm347896.htm
  11. Schoen C, Doty MM, Collins SR, Holmgren AL. Insured but not protected: How many adults are underinsured? Health Affairs (Millwood). 2005;Suppl Web Exclusives:W5-289-W5-302
  12. Ford JG, Howerton MW, Lai GY, Gary TL, Bolen S, Gibbons MC, et al. Barriers to recruiting underrepresented populations to cancer clinical trials: A systematic review. Cancer. 2008;112(2):228-242
  13. Advani AS, Atkeson B, Brown CL, Peterson BL, Fish L, Johnson JL, et al. Barriers to the participation of African-American patients with cancer in clinical trials: A pilot study. Cancer. 2003;97(6):1499-1506
  14. Lara PN Jr, Paterniti DA, Chiechi C, Turrell C, Morain C, Horan N, et al. Evaluation of factors affecting awareness of and willingness to participate in cancer clinical trials. Journal of Clinical Oncology. 2005;23(36):9282-9289
  15. Trauth JM, Jernigan JC, Siminoff LA, Musa D, Neal-Ferguson D, Weissfeld J. Factors affecting older African American women's decisions to join the PLCO Cancer Screening Trial. Journal of Clinical Oncology. 2005;23(34):8730-8738
  16. Komaromy M, Grumbach K, Drake M, Vranizan K, Lurie N, Keane D, et al. The role of black and Hispanic physicians in providing health care for underserved populations. The New England Journal of Medicine. 1996;334(20):1305-1310
  17. Mandelblatt J, Kaufman E, Sheppard VB, Pomeroy J, Kavanaugh J, Canar J, et al. Breast cancer prevention in community clinics: Will low-income Latina patients participate in clinical trials? Preventive Medicine. 2005;40(6):611-618
  18. Moinpour CM, Atkinson JO, Thomas SM, Underwood SM, Harvey C, Parzuchowski J, et al. Minority recruitment in the prostate cancer prevention trial. Annals of Epidemiology. 2000;10(8 Suppl):S85-S91
  19. Murthy VH, Krumholz HM, Gross CP. Participation in cancer clinical trials: Race-, sex-, and age-based disparities. JAMA. 2004;291(22):2720-2726
  20. Kimmick GG, Peterson BL, Kornblith AB, Mandelblatt J, Johnson JL, Wheeler J, et al. Improving accrual of older persons to cancer treatment trials: A randomized trial comparing an educational intervention with standard information: CALGB 360001. Journal of Clinical Oncology. 2005;23(10):2201-2207
  21. Howerton MW, Gibbons MC, Baffi CR, Gary TL, Lai GY, Bolen S, et al. Provider roles in the recruitment of underrepresented populations to cancer clinical trials. Cancer. 2007;109(3):465-476
  22. Pinto HA, McCaskill-Stevens W, Wolfe P, Marcus AC. Physician perspectives on increasing minorities in cancer clinical trials: An Eastern Cooperative Oncology Group (ECOG) Initiative. Annals of Epidemiology. 2000;10(8 Suppl):S78-S84
  23. Nguyen TT, Somkin CP, Ma Y. Participation of Asian-American women in cancer chemoprevention research: Physician perspectives. Cancer. 2005;104(12 Suppl):3006-3014
  24. McCarthy CR. Historical background of clinical trials involving women and minorities. Academic Medicine. 1994;69(9):695-698
  25. Thomas SB, Quinn SC. The Tuskegee Syphilis Study, 1932 to 1972: Implications for HIV education and AIDS risk education programs in the black community. American Journal of Public Health. 1991;81(11):1498-1505
  26. United States Congress, House Committee on Energy and Commerce, Subcommittee on Health and the Environment. NIH Revitalization Act: Hearing on H.R. 4, a bill to amend the Public Health Service Act, February 3, 1993. Washington, DC: U.S. Government Printing Office; 1993
  27. Chen MS Jr, Lara PN, Dang JH, Paterniti DA, Kelly K. Twenty years post-NIH Revitalization Act: Enhancing minority participation in clinical trials (EMPaCT): Laying the groundwork for improving minority clinical trial accrual: Renewing the case for enhancing minority participation in cancer clinical trials. Cancer. 2014;120(Suppl 7):1091-1096
  28. Li JS, Eisenstein EL, Grabowski HG, Reid ED, Mangum B, Schulman KA, et al. Economic return of clinical trials performed under the pediatric exclusivity program. JAMA. 2007;297(5):480-488
  29. United States Food and Drug Administration. Food and Drug Administration Modernization Act of 1997: FDA Plan for Statutory Compliance. Rockville, MD: FDA; 1998
  30. United States Food and Drug Administration. Best Pharmaceuticals for Children Act of 2002. Rockville, MD: FDA; 2002
  31. Rodriguez W, Selen A, Avant D, Chaurasia C, Crescenzi T, Gieser G, et al. Improving pediatric dosing through pediatric initiatives: What we have learned. Pediatrics. 2008;121(3):530-539
  32. Field MJ, Ellinger LK, Boat TF. IOM review of FDA-approved biologics labeled or studied for pediatric use. Pediatrics. 2013;131(2):328-335
  33. Kesselheim AS. Using market-exclusivity incentives to promote pharmaceutical innovation. The New England Journal of Medicine. 2010;363(19):1855-1862
  34. Luo J, Kesselheim AS. Underrepresentation of older adults in cancer trials. JAMA. 2014;311(9):965-966
  35. Baker-Smith CM, Benjamin DK Jr, Grabowski HG, Reid ED, Mangum B, Goldsmith JV, et al. The economic returns of pediatric clinical trials of antihypertensive drugs. American Heart Journal. 2008;156(4):682-688
  36. Boots I, Sukhai RN, Klein RH, Holl RA, Wit JM, Cohen AF, et al. Stimulation programs for pediatric drug research—Do children really benefit? European Journal of Pediatrics. 2007;166(8):849-855
  37. Benjamin DK Jr, Smith PB, Jadhav P, Gobburu JV, Murphy MD, Hasselblad V, et al. Pediatric antihypertensive trial failures: Analysis of end points and dose range. Hypertension. 2008;51(4):834-840
  38. Benjamin DK Jr, Smith PB, Murphy MD, Roberts R, Mathis L, Avant D, et al. Peer-reviewed publication of clinical trials completed for pediatric exclusivity. JAMA. 2006;296(10):1266-1273
