
Challenges of Patient Selection for Phase I Oncology Trials

By Mark Voskoboynik and Hendrik-Tobias Arkenau

Submitted: June 10th 2014. Reviewed: October 23rd 2014. Published: June 3rd 2015.

DOI: 10.5772/59712


1. Introduction

The modern era of oncology has seen an enormous increase in the number of therapeutic agents being tested in cancer patients, with a broad variety of mechanisms of action, indications and rationales for their use. For an oncology drug to gain approval by the relevant government drug authority, such as the United States Food and Drug Administration (FDA) or the European Medicines Agency (EMA), it must demonstrate adequate safety and efficacy as well as a favourable risk-benefit profile. As a result, all new oncology drugs must go through a process of investigation, usually beginning with pre-clinical laboratory and animal testing and progressing through the required clinical trials. The average time taken for a new drug to progress through clinical testing to approval is approximately 7.6 years [1]. The costs of drug development are large, estimated at up to $1 billion per approved drug, and are ever-increasing, placing a growing burden on health-care services [2, 3]. This has a tremendous impact on the cost of health-care provision, with novel anticancer drugs often carrying a price tag of more than $10,000 per month of treatment [4, 5]. Offsetting these costs are the tremendous improvements in patient outcomes made in recent times with targeted therapies such as imatinib, trastuzumab and crizotinib, to name just a few [6-8]. Patient selection for early phase oncology trials is of utmost importance because of the cascading effects it has on subsequent drug development and a drug's ultimate success as a safe, beneficial and cost-effective treatment.

2. Oncology clinical trials — Background

New drugs are developed in a sequential and rational manner from the moment of their discovery in the pre-clinical phase through the various phases of clinical trials, hopefully leading to their ultimate approval and availability for patients.

2.1. Clinical trial phases

Drugs are developed through several phases of clinical trials with each phase designed to answer specific questions and meet various endpoints (Figure 1). Each clinical trial phase can take a variable period of time to complete depending on the treatment setting, particular indication, trial drug and overall patient accrual rates. Each trial phase has specific challenges although a detailed discussion with regards to phase II and III trials is beyond the scope of this chapter.

Phase I trials, including first-in-human (FIH) trials, focus on a small group of patients to attempt to define the safety and tolerability of a particular treatment as well as the optimal dose, usually called the maximum tolerated dose (MTD).

Phase II trials use the information garnered from the phase I trial, particularly with regard to appropriate dosing and sometimes patient selection, and the treatment is investigated in a larger number of patients, often with more specific disease characteristics than the phase I population. Increasingly, phase II trials have multiple arms, and their focus is on demonstrating a signal of treatment efficacy and consolidating the early safety data yielded by the phase I trial.

If a strong signal for the effectiveness of the trial treatment is obtained from the phase II trial, an even larger phase III trial is then conducted. Phase III trials are designed to establish the efficacy, or lack thereof, of a trial treatment. As a result, a larger patient population is required and the trial treatment is compared with an established standard-of-care treatment, or placebo if there is no standard treatment available to the particular patient group. Usually, it is the data and results from this study that are relied upon to gain drug approval.

Figure 1.

The standard pathway for oncology drug development, including the various clinical trial phases.

2.2. Phase I clinical trials

The primary aim of a phase I oncology clinical trial is to identify the maximum tolerated dose (MTD) of a particular drug, conventionally the highest dose at which no more than one-third of treated patients experience a dose-limiting toxicity (DLT). This allows the identification of an optimal and safe drug dose to take forward for further drug development, called the recommended phase II dose (RP2D). For cytotoxic drugs, the RP2D is usually the highest dose that can be delivered without exposing patients to unacceptable levels of toxicity. For targeted drugs, the dose that causes a treatment response and clinical activity may be very different from the MTD [9]. An important component of phase I trials is to provide patients with a safe treatment at doses that are as close to therapeutic as possible. There are often multiple secondary endpoints in phase I trials, including tolerability, response to treatment, and the pharmacokinetics and pharmacodynamics of the study drug(s).

The aims of phase I oncology trials clearly have a significant impact on trial design. The intention of phase I trial design is to minimise the number of patients exposed to either sub-therapeutic drug doses or severe treatment-related toxicity. There is a basic tension between escalating drug doses too quickly, exposing more patients to toxicity, and escalating too slowly, exposing patients to doses that are sub-therapeutic and ineffective [10]. The design of phase I trials is therefore critical in order to minimise the risk of a negative outcome for each individual patient while at the same time controlling the number of patients required for trial enrolment in order to accomplish the aims of the trial.

The traditional design incorporating the ‘3 + 3’ method of dose escalation is still the most widely used. This involves treating 3 patients at a time at each dose level. The dose level is increased for each subsequent group of 3 patients until at least 1 dose-limiting toxicity (DLT) occurs within a group. If only 1 of 3 patients has a DLT, a further 3 patients are treated at the same dose level. If 2 or more DLTs occur at a dose level, dose escalation is stopped and occasionally a further 3 patients are treated at the dose level below. The highest dose level below the maximally administered dose at which no more than 1 of 6 patients experienced a DLT is considered the MTD and is taken as the RP2D. Dose levels are traditionally defined using a modified Fibonacci dose escalation, whereby the dose is increased by increments of 100%, 67%, 50% and 40%, followed by 33% for all subsequent levels.
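As a concrete illustration of these escalation rules, the following minimal sketch simulates a single 3 + 3 trial over a modified Fibonacci dose ladder. The starting dose, number of levels and the `true_dlt_prob` toxicity model are purely hypothetical assumptions for the purpose of the example, not data from any real trial, and the occasional extra cohort at the level below described above is omitted for brevity.

```python
import random


def fibonacci_dose_levels(start_dose, n_levels):
    """Build a modified Fibonacci dose ladder: +100%, +67%, +50%, +40%, then +33% per level."""
    increments = [1.00, 0.67, 0.50, 0.40]
    doses = [start_dose]
    for i in range(1, n_levels):
        step = increments[i - 1] if i - 1 < len(increments) else 0.33
        doses.append(round(doses[-1] * (1 + step), 1))
    return doses


def simulate_3_plus_3(doses, true_dlt_prob, seed=42):
    """Simulate one 3 + 3 trial.

    true_dlt_prob is a hypothetical function mapping dose -> probability of a DLT.
    Returns the MTD: the highest dose level at which no more than 1 of 6 patients
    experienced a DLT (None if even the first level is too toxic).
    """
    random.seed(seed)
    mtd = None
    for dose in doses:
        dlts = sum(random.random() < true_dlt_prob(dose) for _ in range(3))
        if dlts == 1:  # expand the cohort to 6 patients at the same dose
            dlts += sum(random.random() < true_dlt_prob(dose) for _ in range(3))
        if dlts >= 2:  # 2 or more DLTs: stop escalation, MTD is the level below
            return mtd
        mtd = dose     # tolerable (<=1/6 DLTs): escalate to the next level
    return mtd


if __name__ == "__main__":
    ladder = fibonacci_dose_levels(start_dose=10, n_levels=8)  # arbitrary units
    mtd = simulate_3_plus_3(ladder, true_dlt_prob=lambda d: min(0.9, d / 150))
    print("Dose ladder:", ladder)
    print("Estimated MTD / RP2D:", mtd)
```

In practice such a simulation would be repeated many times to study the operating characteristics of a proposed design before a protocol is finalised.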

Novel designs include accelerated titration designs, the continual reassessment method and adaptive trial designs [11]. Accelerated titration designs attempt to make phase I trials more efficient and more accurate in determining the RP2D. A group of proposed accelerated titration designs was developed by Simon and colleagues in 1997 [12]. The key features of these designs are rapid dose escalation, intra-patient dose escalation and the ability to analyse trials using a dose-toxicity model [11]. The most popular of these designs, ‘design 4’, starts with an initial accelerated phase comprising single-patient cohorts in which the dose is doubled at each dose level. When the first DLT is experienced, the cohort for that dose level is expanded to include 3 patients and, subsequently, standard phase I dose escalation and design is employed. This design also allows intra-patient dose escalation if a particular patient has no toxicity at their current dose. In the simulated phase I trials presented by Simon and colleagues, there was a significant reduction in the number of patients treated at sub-therapeutic doses, without a significant increase in the proportion of patients exposed to significant treatment toxicity. Other adaptive trial design methods exist, including the continual reassessment method (CRM) introduced by O’Quigley and colleagues in 1990 [13].
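A rough sketch of the accelerated phase of such a design follows. It is loosely modelled on ‘design 4’ but deliberately simplified: intra-patient escalation and the ‘second moderate toxicity’ trigger of the published designs are omitted, the doubling ladder and toxicity model are arbitrary assumptions, and the hand-off point to conventional 3 + 3 escalation is simply reported rather than simulated.

```python
import random


def accelerated_titration(doses, true_dlt_prob, seed=0):
    """Accelerated phase only, loosely based on Simon et al.'s 'design 4':
    single-patient cohorts with the dose doubled at each level until the first
    DLT occurs, at which point the trial reverts to conventional 3 + 3 cohorts.
    Returns the dose level at which standard 3 + 3 escalation would take over."""
    random.seed(seed)
    for level, dose in enumerate(doses):
        if random.random() < true_dlt_prob(dose):
            return level              # first DLT: hand over to 3 + 3 at this level
    return len(doses) - 1             # no DLT seen across the whole ladder


doubling_ladder = [10 * 2 ** i for i in range(8)]             # arbitrary units
switch_level = accelerated_titration(doubling_ladder, lambda d: min(0.9, d / 800))
print("Revert to 3 + 3 escalation at dose level", switch_level,
      "(dose", doubling_ladder[switch_level], ")")
```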

The potential benefit of these novel designs is that fewer patients are treated at sub-therapeutic or non-therapeutic doses; however, they do not appear to have reduced the number of patients enrolled onto trials. By adopting these novel trial designs, the most appropriate recruitment structure can be used for the particular drug under investigation.

3. Patient selection for clinical trials

3.1. Estimation of prognosis in oncology

Selecting the appropriate patient for early phase clinical trials is a fundamental component of any clinical trial design. A key component, particularly for phase I trials but probably true for all phases of drug development, is the assessment of an individual patient's prognosis. Many patients being considered for enrolment onto a phase I oncology trial have progressed through all standard lines of treatment and often have a limited life expectancy. Standard trial eligibility criteria have been designed largely with the intention of minimising the number of patients enrolled onto these studies who have a poor prognosis and a greater potential for toxicity.

Estimating prognosis is inherently challenging for a clinician, and estimates are often made on the basis of intuition and experience rather than in a scientific or evidence-based manner. Physicians often make inaccurate estimates of a patient's prognosis, usually by being optimistic and overestimating survival. Various studies have shown that estimates of the prognosis of terminally ill patients can be up to five times longer than their actual survival, exemplifying just how difficult making these estimates is [14-16].

Routine trial eligibility criteria include a good performance status, adequate organ function (haematological, renal and hepatic) and typically an anticipated life expectancy of greater than 12 weeks. Performance status is most commonly assessed using the Eastern Cooperative Oncology Group (ECOG) performance status score, which is graded from 0 to 5 [17]. It is a validated assessment of a patient's ability to perform routine activities of daily living. A performance status (PS) of 0 indicates that the patient is fully active, ambulant and able to carry out all activities without restriction, whereas a PS of 5 is applied to a patient who is deceased. Most trials permit patients with a PS of 0 to 1, and some allow a PS of 2 (a patient partially restricted in physical activity has a PS of 1, while a patient unable to carry out work activities but remaining ambulatory and self-caring has a PS of 2). Another commonly used assessment of performance status is the Karnofsky Performance Status (KPS) score, which has more specific gradations between 0% (dead) and 100% (asymptomatic without complaints) in 10% increments [18, 19].
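For illustration, these routine criteria can be expressed as a simple screening check. The sketch below is a loose, illustrative encoding only: the single `adequate_organ_function` flag stands in for the detailed haematological, renal and hepatic thresholds that a real protocol would specify, and the default cut-off of PS ≤ 1 simply reflects the common practice described above.

```python
from dataclasses import dataclass

# ECOG performance status descriptors (0 = fully active ... 5 = deceased).
ECOG_DESCRIPTIONS = {
    0: "Fully active, able to carry out all activities without restriction",
    1: "Restricted in strenuous activity but ambulatory and able to do light work",
    2: "Ambulatory and capable of self-care but unable to work; up >50% of waking hours",
    3: "Capable of only limited self-care; confined to bed or chair >50% of waking hours",
    4: "Completely disabled; cannot carry out any self-care; totally confined to bed or chair",
    5: "Deceased",
}


@dataclass
class ScreeningPatient:
    ecog_ps: int
    life_expectancy_weeks: float
    adequate_organ_function: bool  # haematological, renal and hepatic checks rolled into one flag


def meets_routine_criteria(patient: ScreeningPatient, max_ps: int = 1) -> bool:
    """Screen against the routine criteria described above: PS 0-1 (sometimes 2),
    adequate organ function and an anticipated life expectancy of >12 weeks."""
    return (patient.ecog_ps <= max_ps
            and patient.adequate_organ_function
            and patient.life_expectancy_weeks > 12)


print(meets_routine_criteria(ScreeningPatient(1, 20, True)))              # True
print(meets_routine_criteria(ScreeningPatient(2, 20, True)))              # False (PS 2 with default cut-off)
print(meets_routine_criteria(ScreeningPatient(2, 20, True), max_ps=2))    # True (trial allowing PS 2)
```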

It is well documented that patients with a poorer performance status have an inferior overall prognosis, whether the KPS or ECOG PS is used [20-23]. It has been shown that an ECOG PS of 3 indicates a prognosis of less than 3 months and a PS of 4 a prognosis of less than 1 month.

A number of other factors can be used to predict the prognosis of oncology patients. The primary malignancy, for example, has a great impact on the prediction of patient survival: patients diagnosed with metastatic carcinoid tumours often survive for a number of years, whereas patients with metastatic pancreatic cancer have a median overall survival of less than 12 months even with the best treatment currently available [24, 25]. Various laboratory findings have also been associated with a poor prognosis, such as hypoalbuminaemia, raised inflammatory markers (including leukocytosis, raised C-reactive protein (CRP) and lymphopenia) and certain metabolic abnormalities such as hypercalcaemia [26, 27].

A number of tumour-specific prognostic scores have been developed to help stratify patients into various treatments based on their risk. A good illustrative example is the Memorial Sloan-Kettering Cancer Center (MSKCC) risk criteria for metastatic clear cell renal cell carcinoma. These criteria, published by Motzer and colleagues in 1999, were built around five pre-treatment features associated with shorter patient survival [28]. The five prognostic factors were low performance status (KPS <80%), high serum lactate dehydrogenase (>1.5 times the upper limit of normal), low haemoglobin (below the lower limit of normal), high corrected serum calcium (>10 mg/dL) and absence of prior nephrectomy. Patients with three or more risk factors were considered to be in a ‘poor-risk’ category with a median survival of 4 months, compared with the ‘favourable-risk’ category, containing patients with no risk factors, which had a median survival of 20 months.
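A minimal sketch of how this factor count might be computed is shown below. The laboratory reference limits (`ldh_uln`, `hb_lln`) are left as caller-supplied assumptions, and the intermediate group (one to two factors) is part of the published criteria even though the text above quotes survival only for the favourable- and poor-risk groups.

```python
def mskcc_risk_factors(kps_percent, ldh, ldh_uln, haemoglobin, hb_lln,
                       corrected_calcium_mg_dl, prior_nephrectomy):
    """Count the five pre-treatment risk factors of Motzer and colleagues (1999).
    The laboratory reference limits (ldh_uln, hb_lln) must be supplied by the caller."""
    factors = [
        kps_percent < 80,                      # low performance status
        ldh > 1.5 * ldh_uln,                   # high serum LDH
        haemoglobin < hb_lln,                  # low haemoglobin
        corrected_calcium_mg_dl > 10,          # high corrected serum calcium
        not prior_nephrectomy,                 # absence of prior nephrectomy
    ]
    return sum(factors)


def mskcc_risk_group(n_factors):
    if n_factors == 0:
        return "favourable"    # median survival ~20 months in the original cohort
    if n_factors <= 2:
        return "intermediate"
    return "poor"              # 3 or more factors: median survival ~4 months


print(mskcc_risk_group(mskcc_risk_factors(kps_percent=70, ldh=500, ldh_uln=250,
                                          haemoglobin=10.5, hb_lln=12.0,
                                          corrected_calcium_mg_dl=10.4,
                                          prior_nephrectomy=False)))   # -> poor
```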

A large number of molecular tests have been shown to have prognostic value in various malignancies. Examples of molecular results that confer a poorer prognosis in advanced cancers include BRAF mutations in colorectal cancer and melanoma, human epidermal growth factor receptor 2 (HER2) positivity in breast cancer and phosphatidylinositol-3,4,5-trisphosphate 3-phosphatase (PTEN) deficiency in prostate cancer, to name a few [29-32]. Interestingly, recent developments in targeted therapies have led to the use of newer agents directed against some of these molecular or genetic abnormalities, in some cases resulting in significant improvements in prognosis and overcoming the negative prognostic implications of the result in the first place.

In addition to various clinical, molecular and genetic factors, circulating tumour cells (CTCs) have been shown to be prognostic in a variety of malignancies. CTCs are shed from solid tumours into the circulation. Recent improvements in technology have led to a variety of laboratory methods that can effectively detect and isolate these CTCs, which are otherwise only very rarely found in the circulation. The CellSearch System® (Janssen Diagnostics, Inc.) is the most frequently used and is the only FDA-cleared device for measuring CTCs as an aid for clinicians treating patients with prostate, breast and colon cancers. A series of key studies conducted approximately 10 years ago showed that patients with higher levels of CTCs in the circulation had a poorer prognosis. For example, patients with castrate-refractory prostate cancer and a CTC count of fewer than 5 per 7.5 mL of blood had a median overall survival of 21.7 months, compared with 11.5 months for patients with a higher CTC count [33]. Similar results were seen in patients with breast and colon cancer, whereby a particular cut-point of CTC counts could clearly differentiate patients with a favourable prognosis from those with an unfavourable prognosis [34, 35]. Although there are currently limitations with the technology and its implementation into routine clinical practice, this field is developing rapidly and will likely play a part in patient selection in the future.
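Expressed as a rule, this is simply a threshold classification. The short sketch below uses the prostate cancer cut-point of 5 CTCs per 7.5 mL quoted above, with the threshold left as a parameter because other tumour types use their own validated cut-points.

```python
def ctc_prognostic_group(ctc_count_per_7_5ml, threshold=5):
    """Classify a CellSearch CTC count as favourable or unfavourable.
    The default threshold of 5 CTCs per 7.5 mL reflects the prostate cancer
    example above; other tumour types use their own validated cut-points."""
    return "favourable" if ctc_count_per_7_5ml < threshold else "unfavourable"


print(ctc_prognostic_group(2))   # favourable
print(ctc_prognostic_group(12))  # unfavourable
```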

Overall, the prediction of a patient’s individual prognosis is challenging, complex and involves a variety of factors related to their clinical situation, the characteristics of the cancer, the molecular biology of the cancer as well as other factors including circulating tumour cells (Figure 2).

Figure 2.

Factors impacting on optimal patient selection for phase I clinical trials in oncology.

3.2. Phase I patient selection criteria

A number of prognostic scores have been published in recent times with the aim of improving patient selection for phase I clinical trials [36-40]. Probably the most important of these scores and the only one that has been prospectively validated is the Royal Marsden Hospital (RMH) score (see Table 1).

Arkenau and colleagues initially performed a retrospective analysis of 212 patients who were enrolled onto phase I trials and reviewed their demographic data as well as a number of clinical and laboratory variables [38]. Using a multivariate analysis model, three independent variables were found to be associated with poor overall survival: an elevated lactate dehydrogenase (LDH, above the upper limit of normal [ULN]), low albumin (<35 g/L) and more than two sites of metastases. Using these variables, a score was developed that could broadly separate patients into two groups: those with a good prognosis (RMH score 0 to 1) and those with a poor prognosis (RMH score 2 to 3). This retrospective analysis subsequently led to the prospective validation of the score in a separate publication by the same group. In this validation study, 78 patients treated prospectively within one of 19 phase I trials were evaluated [36]. Ninety-five percent of patients had an ECOG performance status of 0 to 1; most (68%) were treated with new biologic agents, with a minority (32%) receiving cytotoxic-based treatment. The patients had a broad range of malignancies, with over 80% having gastrointestinal, breast, gynaecological, sarcoma or urological cancers. All patients were required to have evidence of disease progression before study entry. Five patients (7.5%) had a partial response to their treatment and fourteen (17.9%) achieved stable disease at three months as per the Response Evaluation Criteria in Solid Tumours (RECIST). The median overall survival for the entire study population was 27.1 weeks, with an OS of 33.0 weeks for patients with a score of 0 to 1 compared with 15.7 weeks for those with a score of 2 to 3 (P=0.036). These findings represent the first prospectively validated prognostic score that might assist in the optimal selection of patients for entry into phase I trials.
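Because the RMH score is a simple count of three binary factors, it can be written down directly. The sketch below encodes the scoring rule exactly as described above; the LDH upper limit of normal is supplied by the caller, and the function and parameter names are illustrative rather than part of any published implementation.

```python
def rmh_prognostic_score(ldh, ldh_uln, albumin_g_per_l, n_metastatic_sites):
    """Royal Marsden Hospital (RMH) prognostic score: one point each for LDH above
    the upper limit of normal, albumin <35 g/L and more than two sites of metastases.
    A score of 0-1 indicates a good prognosis, 2-3 a poor prognosis."""
    score = (int(ldh > ldh_uln)
             + int(albumin_g_per_l < 35)
             + int(n_metastatic_sites > 2))
    return score, "good prognosis" if score <= 1 else "poor prognosis"


# Raised LDH, albumin 31 g/L and three metastatic sites -> (3, 'poor prognosis')
print(rmh_prognostic_score(ldh=600, ldh_uln=450, albumin_g_per_l=31, n_metastatic_sites=3))
```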

The RMH prognostic score was further validated at the MD Anderson Cancer Center in their phase I trial patients [41]. The investigators retrospectively reviewed 229 consecutive patients with lung, pancreatic and head and neck tumours who were treated in 57 phase I trials. They applied the RMH score to these patients and found that those with a good RMH prognostic score had a longer median survival than those with a poor prognostic score (33.9 weeks vs 21.1 weeks, P<0.0001). The authors of this study therefore showed that the RMH prognostic score can be accurately applied to patients treated at a separate institution, across a range of malignancies and a range of trials.

Stavraka and colleagues used a multivariate approach to identify variables that predicted survival in patients referred to the phase I oncology unit at their institution, devising the Hammersmith Score (HS) [37]. Analyses were carried out on 118 patients, with 52 patients (44%) treated in one of 7 phase I trials. Of the patients who actually entered a study, only 1 (2%) had a partial response and 15 (28%) had stable disease. The median OS in patients who entered a study was 22 weeks, compared with 11 weeks in those who did not. The multivariate analysis identified four independent negative predictive factors for OS: albumin <35 g/L (P=0.01), LDH >450 IU/L (P<0.001), sodium <135 mmol/L (P=0.06) and ECOG PS ≥2 (P=0.04). Based on three of these variables, excluding ECOG PS, a scoring system was devised to stratify patients into either a low-risk (HS score 0 to 1) or a high-risk group (HS score 2 to 3). Patients in the low-risk group had a median OS of 31.2 weeks compared with a median OS of 8.9 weeks in the high-risk group (P<0.001).

Chau and colleagues from the Princess Margaret Hospital in Toronto, Canada, assessed 17 potential clinical characteristics in 233 patients enrolled in phase I trials to create their own risk score, the Princess Margaret Hospital Index (PMHI) [39]. In their cohort, the median overall survival was 320 days, significantly longer than the 27 weeks reported by Arkenau for the Royal Marsden group. In the multivariate analysis, high LDH (p=0.001), more than 2 metastatic sites (p=0.004) and ECOG PS >0 (p=0.05) were significantly associated with OS. Three variables were associated with 90-day mortality: albumin <35 g/L (p=0.008), more than 2 metastatic sites (p=0.02) and ECOG PS >1 (p=0.001). A single point was assigned to each of these variables, and patients with a PMHI score of 0-1 had a lower 90-day mortality rate than patients with a score of 2-3 (7% vs 37%, respectively).
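Both of these scores are, like the RMH score, simple counts of binary risk factors. The sketch below encodes them as described above (using units of IU/L, g/L and mmol/L); the function names are illustrative, and note that the PMHI as written here is scored against 90-day mortality rather than overall survival.

```python
def hammersmith_score(ldh_iu_per_l, albumin_g_per_l, sodium_mmol_per_l):
    """Hammersmith Score (HS): one point each for LDH >450 IU/L, albumin <35 g/L
    and sodium <135 mmol/L. A score of 0-1 is low risk, 2-3 is high risk."""
    return (int(ldh_iu_per_l > 450)
            + int(albumin_g_per_l < 35)
            + int(sodium_mmol_per_l < 135))


def pmh_index(albumin_g_per_l, n_metastatic_sites, ecog_ps):
    """Princess Margaret Hospital Index (PMHI) for 90-day mortality: one point each
    for albumin <35 g/L, more than 2 metastatic sites and ECOG PS >1."""
    return (int(albumin_g_per_l < 35)
            + int(n_metastatic_sites > 2)
            + int(ecog_ps > 1))


print(hammersmith_score(ldh_iu_per_l=520, albumin_g_per_l=33, sodium_mmol_per_l=138))  # 2 -> high risk
print(pmh_index(albumin_g_per_l=33, n_metastatic_sites=4, ecog_ps=1))                  # 2 -> higher-risk group
```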

A large, European, multicentre study was designed to generate and validate a prognostic model for 90-day mortality, a common eligibility criterion in phase I oncology trials [42]. Data from 2,232 patients enrolled in phase I trials across 14 oncology units were evaluated. The median overall survival was 38.6 weeks, with a 90-day mortality rate of 16.5%. Two prognostic models were derived using a variety of variables, including ECOG performance status, albumin, LDH, alkaline phosphatase, number of metastatic sites, lymphocytes, white cell count and the time per treatment index (TPTi). The TPTi is a log ratio of the time interval between the diagnosis of advanced/metastatic disease and phase I trial entry over the number of lines of systemic treatment. The most predictive combination of variables included albumin and LDH together with either ECOG PS or the number of metastatic sites, similar to the RMH score. When compared with the RMH score using receiver operating characteristic (ROC) curves, no statistically significant differences were seen. When the two models derived in this study (models A and B) were applied to patients with PS 0 to 1, higher scores identified patients with an OS of less than 11 weeks. When the RMH score was used to define the poorest-risk group, their median OS was 14.6 weeks. The prognostic score derived from model B (‘European score B’) was assessed for its performance on a group of 200 patients who were eligible for phase I trials (PS 0-1); it halved the 90-day mortality and reduced the total number of patients recruited by 20%. This score performed almost identically to the RMH score when applied to the same population of patients.
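The TPTi and a point-per-factor tally of the model B components listed in Table 1 can be sketched as follows. The equal weighting of factors is a simplifying assumption for illustration only; the published model was fitted formally and works with the log of the TPTi ratio, whereas the cut-off below is applied to the raw ratio in weeks per line of treatment, as quoted in Table 1.

```python
def tpti_weeks_per_line(weeks_from_advanced_diagnosis_to_trial_entry, n_lines_of_treatment):
    """Time per treatment index expressed in weeks per prior line of systemic
    treatment (the published model works with the log of this ratio)."""
    return weeks_from_advanced_diagnosis_to_trial_entry / max(n_lines_of_treatment, 1)


def european_score_b(albumin_g_per_l, ldh, ldh_uln, n_metastatic_sites,
                     tpti_weeks, alp, alp_uln, lymphocyte_percent, wbc_per_ul):
    """Tally the European model B components listed in Table 1, one point per factor."""
    factors = [
        albumin_g_per_l < 35,
        ldh > ldh_uln,
        n_metastatic_sites >= 3,
        tpti_weeks < 24,
        alp > alp_uln,
        lymphocyte_percent < 18,
        wbc_per_ul > 10_500,
    ]
    return sum(factors)


tpti = tpti_weeks_per_line(weeks_from_advanced_diagnosis_to_trial_entry=80, n_lines_of_treatment=4)
print(european_score_b(albumin_g_per_l=33, ldh=600, ldh_uln=450, n_metastatic_sites=3,
                       tpti_weeks=tpti, alp=300, alp_uln=130,
                       lymphocyte_percent=15, wbc_per_ul=12_000))  # tpti = 20 weeks/line -> score 7
```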

Along a similar but distinct line, prediction models have been devised to identify patients at particularly high risk of drug toxicity. In one such attempt to improve prediction of a patient's risk of serious drug-related toxicity (SDRT), a nomogram was developed by Hyman and colleagues [43]. Data from 3,104 patients treated in 127 trials sponsored by the National Cancer Institute Cancer Therapy Evaluation Program (CTEP) between 2000 and 2010, drawn from a large, prospectively maintained database, were used to derive a nomogram that could estimate a patient's risk of developing serious toxicity. Trials evaluating cytotoxic or molecularly targeted agents were included, and standard phase I eligibility criteria were used to select appropriate patients. SDRT was defined as a grade ≥3 non-haematological or grade ≥4 haematological toxicity attributed to study treatment, similar to the definition of a dose-limiting toxicity used by the majority of phase I trials. In total, 728 patients (23.5%) experienced an SDRT and 13 patients (0.4%) died as a result of drug-related toxicity. Several factors were found to be predictors (p<0.10) of serious drug-related toxicity in cycle one of trial treatment. Using a variety of statistical methods, a nomogram was built incorporating ECOG PS, white blood cell count, creatinine clearance, albumin, aspartate transaminase (AST), number of study drugs and agent type (biologic or non-biologic). The nomogram was then validated using an independent data set of 234 patients. The authors concluded that their nomogram could improve patient selection for phase I trials, in particular by prospectively identifying patients at high risk of drug toxicity.
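The published nomogram itself is not reproduced here, but the general approach of fitting a cycle-1 SDRT risk model on these predictors can be sketched as below. Everything in this example is synthetic and illustrative: the data, the coefficients used to simulate the outcome and the choice of a plain logistic regression (the original work used more elaborate model-building and validation) are assumptions, not the authors' method.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Entirely synthetic screening data with the predictors used in the published nomogram.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ecog_ps": rng.integers(0, 2, n),
    "wbc": rng.normal(7.0, 2.0, n),
    "creatinine_clearance": rng.normal(90, 20, n),
    "albumin": rng.normal(38, 4, n),
    "ast": rng.normal(30, 10, n),
    "n_study_drugs": rng.integers(1, 3, n),
    "nonbiologic_agent": rng.integers(0, 2, n),
})

# Invented outcome: cycle-1 SDRT made (arbitrarily) more likely with low albumin and worse PS.
logit = 0.25 * (35 - df["albumin"]) + 0.8 * df["ecog_ps"]
sdrt = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(df, sdrt)
risk = model.predict_proba(df)[:, 1]          # per-patient predicted cycle-1 SDRT risk
print("Mean predicted cycle-1 SDRT risk:", round(float(risk.mean()), 3))
```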

There are a number of older retrospective series, and in a systematic review Ploquin and colleagues summarised the published literature on prognostic models for the life expectancy of patients enrolled in phase I trials up to the end of 2009 [44]. Nine publications were identified, all of them retrospective analyses except for the RMH score reported by Arkenau and colleagues in 2009, as described previously [36, 45-51]. Most of these studies fairly consistently identified that the patients at greatest risk of death included those with a poorer performance status and a greater tumour volume (for example, an increased number of metastatic sites or a raised LDH). A consistent limitation of these studies, with one notable exception, is the use of retrospective data; in addition, almost all were single-centre series, which limits their generalisability.

One of the standard inclusion criteria in clinical trials is a life expectancy of greater than 12 weeks, so it is interesting to consider how these scores might apply to this particular inclusion criterion. The RMH score shows that patients in the good-prognosis group have a median survival of 33 weeks, comfortably exceeding this criterion, but even the median survival of the poor-prognosis group exceeds the 12-week threshold (15.7 weeks). The patients in the high-risk group as defined by the HS, albeit assessed retrospectively, had a median survival shorter than 12 weeks (8.9 weeks), so patients falling into this category could potentially be excluded from phase I trials.

With a number of scores now published, it is unclear which is superior in predicting patient survival, and a separate prospective trial would be required to clarify this question. As it currently stands, the RMH score is the only score that has been prospectively validated and therefore has the strongest evidence supporting its use. This is reflected in its more widespread use; however, further research is required to validate its utility in sites outside the Royal Marsden Hospital, across other countries and across a broad cross-section of patient populations.

Royal Marsden Hospital Score [36] (Arkenau 2008)
Prospective validation: Yes
Parameters: LDH (>ULN) = 1; albumin (<35 g/L) = 1; >2 sites of metastases = 1
Overall survival (weeks): score 0-1: 33.0; score 2-3: 15.7
P-value: 0.036; HR: 1.4

Hammersmith Score [37] (Stavraka 2014)
Prospective validation: No
Parameters: LDH (>450 IU/L) = 1; albumin (<35 g/L) = 1; sodium (<135 mmol/L) = 1
Overall survival (weeks): score 0-1: 31.2; score 2-3: 8.9
P-value: <0.001

Princess Margaret Hospital Index [39] (Chau 2011)
Prospective validation: No
Parameters: high LDH; >2 metastatic sites; ECOG PS >0

European Model B [42] (Olmos 2012)
Prospective validation: No
Parameters: albumin (<35 g/L) = 1; LDH (>ULN) = 1; ≥3 sites of metastases = 1; low TPTi (<24 weeks/treatment) = 1; increased ALP (>ULN) = 1; low lymphocyte count (<18%) = 1; high WBC (>10,500/µL) = 1
Overall survival (weeks) by score, with HR: score 0: 141 (HR –); score 1: 61 (HR 2.00); score 2: 54 (HR 2.54); score 3: 37 (HR 3.24); score 4: 29 (HR 4.57); score 5: 21 (HR 6.20); score 6: 11 (HR 14.1); score 7: 10 (HR 14.1)
P-value: 0.036 (log-rank)

Table 1.

Key publications of prognostic scores for phase I oncology trial patients.

Abbreviations: ALP = alkaline phosphatase; HR = hazard ratio; LDH = lactate dehydrogenase; PS = performance status; TPTi = time per treatment index, a log ratio of the time interval between the diagnosis of advanced/metastatic disease and phase I trial entry over the number of lines of systemic treatment; ULN = upper limit of normal; WBC = white blood cell count.


4. Challenges

4.1. Impact of novel agents — Targeted therapies

Conducting phase I oncology trials has a number of inherent challenges. Traditionally, patients enrolled into these trials have exhausted all available standard therapies and, by virtue of having an incurable malignancy and very limited future treatment options, have a shortened life expectancy. There has therefore existed a paradoxical situation whereby the ideal patient has an advanced and often heavily pre-treated cancer but requires a prognosis suitable for exposure to a novel investigational drug. The landscape in this field has shifted over recent years, with phase I trials increasingly investigating targeted, biological agents as opposed to cytotoxic agents. The result of this change is that patients are being enrolled into phase I trials earlier in their disease course, including in the first-line setting. In some ways these changes in practice will make the task of predicting a patient's prognosis slightly more straightforward, as patients are earlier in their disease course. On the other hand, predicting the prognosis of treatment-naïve patients may be more difficult, as the natural history of their cancer has not yet had time to declare itself.

Further complicating the situation is the advent of targeted agents that produce rapid and significant responses. Examples include crizotinib in ALK-rearranged metastatic lung adenocarcinoma, vemurafenib in metastatic melanoma harbouring BRAF V600 mutations, EGFR inhibitors such as gefitinib and erlotinib in EGFR-mutated lung cancer and idelalisib, a PI3K-delta inhibitor, for indolent lymphomas [7, 52-54]. These new agents increasingly have biomarkers that strongly predict a response to treatment, which can be quite rapid. The presence of these predictive biomarkers might mean that patient selection could be relaxed because of the higher likelihood of a response. An illustrative example is the use of EGFR inhibitors in non-small-cell lung cancer. In the initial large phase III trial of erlotinib, an EGFR inhibitor, compared with placebo in previously treated metastatic non-small-cell lung cancer, an unselected group of patients was treated [55]. The objective response rate in the erlotinib group was only 8.9%, although treatment did result in improved overall survival. This was an important trial, but it certainly did not represent a large step forward for this group of patients. At the same time as these EGFR inhibitors were being developed, it was becoming apparent that the presence of an EGFR mutation, either a deletion in exon 19 or a point mutation in exon 21 (L858R), predicted a good response to these targeted agents [56]. Mok and colleagues subsequently published a trial comparing another EGFR inhibitor, gefitinib, with chemotherapy in patients with metastatic lung adenocarcinoma [54]. In this trial, patients found to be harbouring an EGFR mutation had a 71% response rate to gefitinib, far higher than the 8.9% response rate seen in an unselected group of patients.

The advent of these targeted agents, with their improved response rates and often improved safety profiles, means that the traditional paradigm of patient selection will need to be adapted and should evolve with this change in therapeutic agents. A potential consequence of more tightly defining the patient selection criteria for phase I trials is that the criteria could become too selective. If patients entered onto phase I trials are ‘super-selected’ for the best prognostic population, the toxicity observed might not be entirely reflective of the general population. This might mean that the resulting maximum tolerated dose (MTD), and therefore the recommended phase II dose (RP2D), of the trial drug would be too high and would potentially create more drug toxicity in phase II and phase III patients. As we have discussed previously, the cascading effects of an increased rate of drug toxicity due to an overly aggressive calculation of the MTD could impact on drug development and trial costs and could ultimately have a bearing on the success or failure of that drug [57].

4.2. Impact of novel agents — Immunotherapies

Another major advance in the treatment of advanced malignancies has been the development of so-called ‘immune-checkpoint inhibitors’. Modern immunotherapy agents include CTLA-4 inhibitors such as ipilimumab and tremelimumab, which block CTLA-4, a molecule important for down-regulating T-cell activation, thereby enhancing immune activation. Inhibition of the programmed cell death 1 (PD-1) receptor and its primary ligand (PD-L1) with an ever-growing number of drugs, such as nivolumab and pembrolizumab (formerly lambrolizumab), improves the anti-tumour T-cell immune response through a more specific mode of action than CTLA-4 inhibition. This class of compounds has provided significant improvements in patient outcomes, with gains in tumour responses and, most importantly, in patient survival across a broad range of malignancies such as melanoma, renal cell carcinoma and lung cancer, amongst others [58-60].

These immunotherapies, in particular the PD-1 and PD-L1 inhibitors, appear to be largely well tolerated, particularly when compared with chemotherapy, and can induce deep tumour responses that are durable [61]. From the initial trials of these agents there appears to be a significant minority of patients who have very prolonged durations of response, far greater than would otherwise be expected with ‘traditional’ treatments or previous standards of care. For example, among patients with metastatic melanoma treated with ipilimumab, a CTLA-4 inhibitor, approximately 10% achieved a response to treatment, and many of these responses were maintained more than 12-24 months (median 19.3 months) after treatment was commenced. This is in sharp contradistinction to the patients treated in the ‘standard’ therapy arm with chemotherapy (dacarbazine), where the median duration of response was 8.1 months [62]. It is not entirely clear how the patient selection tools described above, such as the RMH and PMH scores, would perform for this class of agents, or whether they are applicable at all. Further research is required to determine the applicability of the phase I prognostic scores to patients enrolled onto trials of these immunotherapies.

5. Future of trial design

With the enormous number of novel therapeutic agents being developed and studied in phase I trials, the future for oncology patients seems bright. With the new immunotherapies, including combination therapies, the concept of curing patients with advanced malignancies has even been considered [61]. Adapting trial eligibility criteria and optimising patient selection are vital for the future of a safe and cost-effective drug development process. It is clear that determining the appropriateness of a particular patient for a particular phase I trial is more complex than applying a variety of scores or nomograms. When studying treatments that are personalised for a particular tumour, biomarker or even mutation, it is also important to individualise patient selection. The tendency and temptation is to adopt easily generalisable patient selection criteria because of their simplicity and reproducibility. The reality is that the behaviour of malignancies differs not only between organs of origin (for example, prostate cancer compared with pancreatic cancer) but also within the same cancer type based on its phenotype (for example, oestrogen receptor-positive breast cancer compared with oestrogen/progesterone/HER2-negative breast cancer) or genotype (for example, BRAF-mutated compared with BRAF wild-type melanoma).

Given the complexity of patient selection described above, patients should be rationally selected with consideration given to tumour characteristics and patient factors as well as to the investigational agents being assessed. Importantly, a degree of flexibility is essential when designing phase I trials for unselected populations, to allow for the often extensive inter-patient and inter-tumour variability. The best way forward for optimising patient selection is to adapt rapidly, in an evidence-based way, to the ever-evolving drug classes being developed, to the ongoing financial pressures of drug development and, not least of all, to patient expectations and the need for ongoing patient safety.
