Dose-response models are applied to animal-based cancer risk assessments and human-based clinical trials, usually with small samples. For sparse data, we rely on a parametric model for efficiency, but posterior inference can be sensitive to the assumed model. In addition, when we utilize prior information, multiple experts may have different prior knowledge about the parameter of interest. When we make sequential decisions to allocate experimental units in an experiment, an outcome may depend on the decision rule, and each decision rule has its own perspective. In this chapter, we address three practical issues in small-sample dose-response studies: (i) model sensitivity, (ii) disagreement in prior knowledge, and (iii) conflicting perspectives in decision rules.
- dose-response models
- consensus prior
- Bayesian decision theory
- individual-level ethics
- population-level ethics
- Bayesian adaptive designs
- sequential decisions
- continual reassessment method
- c-optimal design
- Phase I clinical trials
Dose-response modeling is often used to learn about the effect of an agent on a particular outcome with respect to dose. It is widely applied to animal-based cancer risk assessments and human-based clinical trials. Sample sizes are typically small, so many statistical issues can arise from a limited amount of data, including the impact of a misspecified model, prior sensitivity, and conflicting ethical perspectives in clinical trials. In this chapter, we focus on cases where the outcome variable of interest is binary (a predefined event happened or not) when an experimental unit is exposed to a dose. The main ideas carry over to cases where the outcome variable is continuous or discrete.
There are two different approaches to statistical inference. One approach is frequentist inference, in which we often rely on the sampling distribution of a statistic and large-sample theory. The other approach is Bayesian inference. It is founded on Bayes’ Theorem, and it allows researchers to express prior knowledge independent of data. In a small-sample study, Bayesian inference can be more useful than frequentist inference because we can incorporate both the researcher’s prior knowledge and the observed data to make inference about the parameter of interest. Bayesian ideas are briefly introduced for dose-response modeling with a binary outcome in Section 2.
In a small-sample study, we often rely on a parametric model to gain statistical efficiency (i.e., less variance in parameter estimation), but our inference can be severely biased by the use of a wrong model. To account for model uncertainty, it is reasonable to specify multiple models and make inference based on “averaged inference.” In this regard, Bayesian model averaging (BMA) is a useful method to gain robustness. The BMA method has a wide range of applications, and we focus on its application to animal-based cancer risk assessments in Section 3.
In clinical trials, study participants are real patients, and therefore we need to consider ethics carefully. There are conflicting perspectives of individual- and population-level ethics in early-phase clinical trials. Individual-level ethics focuses on the benefit of trial participants, whereas population-level ethics focuses on the benefit of future patients, which may require some level of sacrifice from trial participants. We compare the two conflicting perspectives in clinical trials based on Bayesian decision theory, and we discuss a compromising method in Section 4 [2, 3].
The sample size of an early-phase (Phase I) clinical trial is often less than 30 subjects. Dose allocations for the first few patients and statistical inference for future patients heavily depend on the researcher’s prior knowledge when data are sparse. When multiple researchers have different prior knowledge about a parameter of interest, one compromising approach is to combine their prior elicitations and average them (i.e., a consensus prior) [4, 5]. When we average the prior elicitations, there are two different approaches to determining the weight of each prior elicitation: weights determined before observing data and weights determined after observing data. We discuss operating characteristics of the two weighting methods in the context of Phase I clinical trials in Section 5.
2. Bayesian inference
In statistics, we address a research question by a parameter, which is often denoted by θ.
The function p(θ | y) is called the posterior density function of θ.
Suppose we observe
It is known as the beta distribution with shape parameters α and β.
where y is the total number of rats that developed the tumor. By Eq. (2), the posterior density function of θ is as follows:
where the normalizing constant makes the posterior density integrate to one. We can recognize that the posterior distribution is Beta(α + y, β + n − y), where n is the total number of rats.
If the researcher fixed
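The prior-to-posterior update described above can be sketched numerically. For illustration, a flat Beta(1, 1) prior is assumed, together with the control-group counts (9 tumors among 86 rats) from the TCDD example discussed later in this section.

```python
# Illustrative beta-binomial update: a Beta(a, b) prior combined with
# y events out of n trials yields a Beta(a + y, b + n - y) posterior.
# The Beta(1, 1) prior below is an assumption made for illustration.

def beta_binomial_posterior(a, b, y, n):
    """Return the posterior shape parameters (a + y, b + n - y)."""
    return a + y, b + n - y

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Flat Beta(1, 1) prior; 9 of 86 animals develop the tumor.
a_post, b_post = beta_binomial_posterior(1, 1, 9, 86)
print(a_post, b_post)                       # posterior is Beta(10, 78)
print(round(beta_mean(a_post, b_post), 4))  # posterior mean 10/88
```

The posterior mean (10/88 ≈ 0.1136) is a compromise between the prior mean (1/2) and the observed proportion (9/86).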
This example is simplified from Shao and Small. In dose-response studies, we model
which is known as a logistic regression model. It is commonly assumed that a dose-response curve increases with respect to dose, so we assume β1 > 0.
To express prior knowledge about (β0, β1), we need to find an appropriate prior density function. It is not simple because it is difficult to express one’s knowledge on the two-dimensional parameter (β0, β1). For mathematical convenience, some practitioners use a flat prior density function. Another way of expressing a lack of prior knowledge about (β0, β1) is as follows
with an arbitrarily large value of
where and . By incorporating both prior and data, the posterior density function is as follows
In animal-based studies, one parameter of interest is the median effective dose, denoted by ED50. It is the dose that satisfies
and it can be shown by algebra that ED50 = −β0/β1. In the case of
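Under a logistic model with intercept β0 and slope β1, the dose at which the response probability is exactly one-half is −β0/β1. A minimal numeric check, using hypothetical coefficient values:

```python
import math

# Logistic dose-response model: P(d) = 1 / (1 + exp(-(b0 + b1 * d))).
# The coefficients b0 and b1 below are hypothetical, for illustration.

def response_prob(d, b0, b1):
    """Probability of the event at dose d under the logistic model."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))

def ed50(b0, b1):
    """P(d) = 1/2 exactly when b0 + b1 * d = 0, i.e., d = -b0 / b1."""
    return -b0 / b1

b0, b1 = -2.0, 0.5
d50 = ed50(b0, b1)
print(d50)                                   # 4.0
print(round(response_prob(d50, b0, b1), 6))  # 0.5
```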
In 1997, the International Agency for Research on Cancer classified 2,3,7,8-tetrachlorodibenzo-p-dioxin (known as TCDD) as a carcinogen for humans based on various empirical evidence. In 1978, Kociba et al. presented data on male Sprague-Dawley rats at four experimental doses: 0, 1, 10 and 100 nanograms per kilogram per day (ng/kg/day). In the control dose group, nine of 86 rats developed a tumor (known as hepatocellular carcinoma); three of 50 rats developed the tumor at dose 1; 18 of 50 rats developed the tumor at dose 10; and 34 of 48 rats developed the tumor at dose 100. Without loss of generality, we let
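The observed tumor proportions in the Kociba et al. data can be tabulated directly from the counts reported above:

```python
# Kociba et al. (1978) rat tumor data:
# dose (ng/kg/day) -> (number of tumors, number of animals)
data = {0: (9, 86), 1: (3, 50), 10: (18, 50), 100: (34, 48)}

# Observed tumor proportion at each experimental dose.
for dose, (y, n) in data.items():
    print(dose, round(y / n, 3))
# 0 -> 0.105, 1 -> 0.06, 10 -> 0.36, 100 -> 0.708
```

Note that the observed proportion at dose 1 is lower than in the control group, which is one reason low-dose inference is sensitive to the assumed model.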
3. Bayesian model averaging
In a small sample, we borrow the strength of a parametric model to gain efficiency in parameter estimation. However, an assumed model may not describe the true dose-response relationship adequately. The impact of model misspecification is not negligible, particularly under a poor experimental design. In such a limited practical situation, Bayesian model averaging (BMA) can be a useful method to account for model uncertainty. It is widely applied in practice, and in this section, we focus on its application to cancer risk assessment for the estimation of a benchmark dose [1, 6, 10, 11].
In Eq. (12), the posterior density function depends on model
In Eq. (13), the prior model probability
This example is continued from the example in Section 2.2. Recall
or equivalently . In words,
with the restrictions 0 <
However, we are not able to calculate the 5th percentile of the model-averaged posterior distribution from the given summary statistics. In fact, we need to approximate the model-averaged posterior distribution, which is a mixture of the two model-specific posteriors weighted by their respective posterior model probabilities, as shown in Figure 4. In the figure, the left and middle panels show approximations of the two model-specific posteriors, and the right panel shows an approximation of the averaged posterior. The averaged posterior density is bimodal, but it is very close to the posterior under the quantal-linear model because the quantal-linear model receives most of the posterior model probability.
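The mixture computation behind Figure 4 can be sketched as follows. The two normal approximations to the model-specific posteriors and the posterior model probabilities below are hypothetical; the point is only that quantiles of the model-averaged posterior must be computed from the mixture, not from the component summaries.

```python
import math

# BMA sketch: the model-averaged posterior is a mixture of model-specific
# posteriors weighted by posterior model probabilities. The two normal
# approximations and the weights below are hypothetical.

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

posteriors = [(1.0, 0.2), (2.5, 0.4)]   # (mean, sd) under models M1, M2
weights = [0.85, 0.15]                  # posterior model probabilities

def averaged_pdf(x):
    return sum(w * normal_pdf(x, mu, sd) for w, (mu, sd) in zip(weights, posteriors))

# Approximate the 5th percentile of the mixture by a Riemann sum on a grid.
step = 0.001
cdf, q05 = 0.0, None
for i in range(-1000, 5000):
    x = i * step
    cdf += averaged_pdf(x) * step
    if cdf >= 0.05:
        q05 = x
        break
print(round(q05, 2))  # near 0.69 under these hypothetical inputs
```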
4. Application of Bayesian decision theory to Phase I trials
In a Phase I cancer trial, the main objectives are to study the safety of a new chemotherapy and to determine an appropriate dose for future patients. Since trial participants are cancer patients, dose allocations require ethical considerations. Whitehead and Williams discussed several Bayesian approaches to dose allocation. One decision rule is devised from the perspective of trial participants (individual-level ethics), and another decision rule is devised from the perspective of future patients (population-level ethics). However, a decision rule devised from population-level ethics is not widely accepted in current practice. Instead, there are proposed decision rules that compromise between the individual- and population-level perspectives [3, 16]. In this section, we discuss the two conflicting perspectives in Phase I clinical trials and a compromising method based on Bayesian decision theory.
Assume a dose-response relationship follows a logistic model
A choice of
4.1. Parameter of interest: maximum tolerable dose
At the end of a trial (observing
4.2. Prior density function: conditional mean priors
A consequence of sequential decisions heavily depends on the prior density function. In particular, the first decision
Suppose a researcher selects two arbitrary doses, say
Using the Jacobian transformation from to , it can be shown that the prior density function of is given by
These are known as conditional mean priors under the logistic model.
4.3. Posterior density function: conjugacy
For notational convenience, we let
where and . After observing
4.4. Loss functions for individual- and population-level ethics
A loss function, which reflects the perspective of individual-level ethics, is as follows:
This loss function is analogous to the original continual reassessment method proposed by O’Quigley et al. The squared error loss attempts to treat a trial participant at
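In the spirit of the continual reassessment method, the squared error loss leads to a simple allocation rule: assign the next patient to the available dose whose current estimate of the toxicity probability is closest to the target rate. The target rate and the posterior estimates below are hypothetical.

```python
# Individual-level ethics sketch: under squared-error loss, the next
# patient is assigned to the dose whose estimated toxicity probability
# is closest to the target rate. All values below are hypothetical.

target = 0.25                               # target toxicity probability
doses = [1, 2, 3, 4, 5]                     # dose levels under study
post_tox = [0.05, 0.12, 0.22, 0.41, 0.60]   # posterior toxicity estimates

next_dose = min(zip(doses, post_tox), key=lambda t: (t[1] - target) ** 2)[0]
print(next_dose)   # dose level 3 (0.22 is closest to 0.25)
```

As the trial proceeds, the posterior estimates are updated after each observed outcome, and the rule is reapplied for the next patient.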
From the perspective of population-level ethics, Whitehead and Brunier proposed a loss function that is equal to the asymptotic variance of the maximum likelihood estimator for
where . Then, the loss function (the asymptotic variance) is given by
is the gradient vector, the partial derivatives of
with the weight defined as . Several important remarks follow from Eq. (29). First, it considers individual-level ethics by including in the numerator. Second, by including in the denominator, where , the population-level loss function reduces the loss by allocating the next patient further away from the weighted average of previously allocated doses (i.e., it is devised from information gain). In the long run, it is devised from a compromise between individual- and population-level ethics, but the compromising process is too slow to be implemented in a small-sample Phase I clinical trial.
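The population-level idea can be illustrated with a delta-method sketch: for each candidate next dose, compute the asymptotic variance of the maximum likelihood estimator of the dose with a target toxicity probability Γ under a logistic model, and allocate where that variance is smallest. The parameter values, target level and candidate doses below are hypothetical, and this sketch does not reproduce the chapter’s exact loss function.

```python
import math

# Population-level ethics sketch under a logistic model with parameters
# (b0, b1): minimize the delta-method asymptotic variance of the MLE of
# the target dose d* = (logit(G) - b0) / b1. All numbers are hypothetical.

def p(d, b0, b1):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * d)))

def fisher_info(doses, b0, b1):
    """2x2 Fisher information for (b0, b1) from binary responses."""
    i00 = i01 = i11 = 0.0
    for d in doses:
        w = p(d, b0, b1) * (1.0 - p(d, b0, b1))
        i00 += w
        i01 += w * d
        i11 += w * d * d
    return [[i00, i01], [i01, i11]]

def asy_var_target(doses, b0, b1, G=0.25):
    """Delta-method variance of the MLE of d* = (logit(G) - b0) / b1."""
    info = fisher_info(doses, b0, b1)
    det = info[0][0] * info[1][1] - info[0][1] ** 2
    inv = [[info[1][1] / det, -info[0][1] / det],
           [-info[0][1] / det, info[0][0] / det]]
    logit_G = math.log(G / (1.0 - G))
    g = [-1.0 / b1, -(logit_G - b0) / b1 ** 2]   # gradient of d*
    return sum(g[i] * inv[i][j] * g[j] for i in range(2) for j in range(2))

allocated = [1.0, 1.0, 2.0]   # doses given to previous patients
b0, b1 = -3.0, 1.0            # current parameter estimates
best = min([1.0, 2.0, 3.0, 4.0],
           key=lambda d: asy_var_target(allocated + [d], b0, b1))
print(best)
```

Note that the variance-minimizing dose need not be the dose currently believed closest to the target, which is exactly the tension with individual-level ethics described above.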
4.5. Loss function for compromising the two perspectives
Kim and Gillen proposed to accelerate the compromising process by modifying the weight in Eq. (29) as follows
is an accelerating factor. It has two implications. First, the compromising process is accelerated toward the individual-level ethics as the trial proceeds (i.e.,
To study the operating characteristics of
Let , the posterior estimate of
Table 1 summarizes simulation results of 10,000 replicates for λ = 0, 0.5, 1, 2, 5 and each prior. For all three priors, we observe similar tendencies. First, gets closer to
In summary, when we place more emphasis on population-level ethics, we have a smaller variance in the estimation for future patients (with a greater absolute bias, potentially due to Jensen’s Inequality), and the resulting distribution becomes more robust to prior elicitations. When we place more emphasis on individual-level ethics, we have a larger variance in the estimation, and the resulting distribution becomes more sensitive to prior elicitations.
5. Consensus prior
In Bayesian inference, researchers are able to utilize information that is independent of the observed data. This allows researchers to incorporate any form of information, such as one’s experience and the existing literature, which may be particularly useful in a small-sample study. On the other hand, there are concerns about subjectivity and prior sensitivity in sparse data. Furthermore, it is possible to have disagreement among multiple researchers’ prior elicitations about a parameter
Suppose there are
where is the posterior density function of
For a prior weighting scheme, we denote which quantifies the credibility of the
where is the marginal likelihood from the
Samaniego discussed self-consistency when compromised inference is used through the prior weighting scheme. Let
be the prior expectation, the mean of the prior density function. Let denote a sufficient statistic that serves as an unbiased estimator for
Self-consistency can be achieved under simple models. For example, let be a random sample, where , and assume as the prior. It can be shown that the maximum likelihood estimator is a sufficient statistic and an unbiased estimator for
where . If we observe , we can achieve self-consistency because . In words, when the prior estimate and the maximum likelihood estimate are identical, the posterior estimate must be consistent with both the prior estimate and the maximum likelihood estimate. Self-consistency can also be achieved in the prior weighting scheme under certain conditions, as illustrated in the following example.
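The self-consistency property can be sketched under a conjugate normal model with known variance, where the posterior mean is a precision-weighted average of the prior mean and the sample mean. When the prior mean coincides with the sample mean, the posterior mean agrees with both. All numbers below are hypothetical.

```python
# Self-consistency sketch under a conjugate normal model: the posterior
# mean is a precision-weighted average of the prior mean and the sample
# mean. When the two coincide, the posterior mean equals both.
# All numerical values below are hypothetical.

def posterior_mean(prior_mean, prior_var, xbar, sigma2, n):
    """Posterior mean for a normal mean with known variance sigma2."""
    w = (n / sigma2) / (n / sigma2 + 1.0 / prior_var)
    return w * xbar + (1.0 - w) * prior_mean

# Prior mean equals the observed sample mean: posterior mean agrees.
print(round(posterior_mean(2.0, 1.0, xbar=2.0, sigma2=4.0, n=10), 6))  # 2.0
# Prior mean differs: the posterior mean falls strictly between the two.
print(round(posterior_mean(2.0, 1.0, xbar=3.0, sigma2=4.0, n=10), 4))
```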
5.1. Binomial experiment
Let for and assume are independent. Suppose the
Let and suppose the
If we allow individual-specific prior elicitation
so the self-consistency is satisfied.
For the posterior weighting scheme given data , the marginal likelihood from the
where is an observed sufficient statistic. Then, the posterior weighting scheme becomes with
If we desire an equal strength from each researcher’s prior elicitation, we may fix and . In the posterior weighting scheme, it is difficult to achieve self-consistency.
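The posterior weighting computation for the binomial experiment can be sketched as follows: each researcher’s weight is proportional to a prior weight times the marginal likelihood of the data under that researcher’s beta prior, and for a Beta(a, b) prior with y successes in n Bernoulli trials the marginal likelihood is a beta-binomial probability. The three priors, the equal prior weights, and the data below are hypothetical.

```python
import math

# Posterior weighting sketch for a binomial experiment: weight each
# expert by (prior weight) x (marginal likelihood under that expert's
# Beta prior). The priors, weights, and data below are hypothetical.

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def marginal_likelihood(y, n, a, b):
    """Beta-binomial probability of y successes in n trials."""
    return math.comb(n, y) * math.exp(log_beta(a + y, b + n - y) - log_beta(a, b))

priors = [(2.0, 8.0), (5.0, 5.0), (8.0, 2.0)]   # three experts' Beta priors
prior_weights = [1 / 3, 1 / 3, 1 / 3]           # equal a priori credibility
y, n = 3, 10                                    # observed data

ml = [marginal_likelihood(y, n, a, b) for a, b in priors]
unnorm = [w * m for w, m in zip(prior_weights, ml)]
total = sum(unnorm)
post_weights = [u / total for u in unnorm]
print([round(w, 3) for w in post_weights])
```

With 3 events in 10 trials, the expert whose prior mean (0.2) is closest to the observed proportion receives the largest posterior weight, and the expert centered at 0.8 is heavily down-weighted.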
Whether or not self-consistency is satisfied, the practical concern is the quality of estimation, such as bias, variance and mean squared error. Assuming
5.2. Applications to Phase I trials under logistic regression model
In this section, we apply the prior weighting scheme and the posterior weighting scheme to Phase I clinical trials under the logistic regression model. We consider the three priors introduced in Section 4.6. We denote Prior 1, 2 and 3 by
The prior means were , and for Priors 1, 2 and 3, respectively.
For the simulation study, we consider three simulation scenarios with sample size
Table 2 provides the simulation results of 10,000 replicates for each scenario under the prior weighting scheme and the posterior weighting scheme. Since the posterior weighting scheme adaptively updates the weights based on empirical evidence, it can reduce bias, but it has a greater variance in the estimation of
The simulation results are analogous to those under the simpler model in Section 5.1. When the true parameter is not well surrounded by the prior guesses, the posterior weighting scheme is preferable with respect to mean squared error due to smaller bias. When the true parameter is well surrounded by the prior guesses, the prior weighting scheme is beneficial with respect to mean squared error due to smaller variance.
As a final comment, we shall be careful about the strength of individual prior elicitations when we implement the posterior weighting scheme in Phase I clinical trials. The strength of an individual prior elicitation depends on (i) the hyper-parameters and , (ii) the prior weight, as well as (iii) the distance between the two arbitrarily chosen doses. It can be seen through the expression
When researchers determine consensus prior elicitations before initiating a trial, the multiplicative term shall be carefully considered together with the hyper-parameters and .
6. Concluding remarks
In this chapter, we have discussed Bayesian inference with averaging, balancing, and compromising in sparse data. In cancer risk assessment, we have observed that low-dose inference can be very sensitive to an assumed parametric model (Section 3.1). In this case, Bayesian model averaging can be a useful method: it provides robustness by using multiple models and posterior model probabilities to account for model uncertainty. In the application of Bayesian decision theory to Phase I clinical trials, we have observed that the sequential sampling scheme heavily depends on the loss function. A loss function devised from individual-level ethics focuses on the benefit of trial participants, and a loss function devised from population-level ethics focuses on the benefit of future patients. It is possible to balance the two conflicting perspectives, and we can adjust the focus by the tuning parameter (Sections 4.5 and 4.6). Finally, the use of a weighted posterior estimate can be a compromising method when two or more researchers have prior disagreement. We have compared the prior and posterior weighting schemes in a small-sample binomial problem (Section 5.1) and in a small-sample Phase I clinical trial (Section 5.2). The prior weighting scheme (data-independent weights) outperforms when the prior estimates surround the truth, and the posterior weighting scheme (data-dependent weights) outperforms when the truth is not well surrounded by the prior estimates. Neither method outperforms the other for all parameter values, so it is important to be aware of their bias-variance tradeoff.