Open access peer-reviewed chapter

# Bayesian Model Averaging and Compromising in Dose-Response Studies

By Steven B. Kim

Submitted: December 6th, 2016 | Reviewed: March 27th, 2017 | Published: November 2nd, 2017

DOI: 10.5772/intechopen.68786

## Abstract

Dose-response models are applied to animal-based cancer risk assessments and human-based clinical trials usually with small samples. For sparse data, we rely on a parametric model for efficiency, but posterior inference can be sensitive to an assumed model. In addition, when we utilize prior information, multiple experts may have different prior knowledge about the parameter of interest. When we make sequential decisions to allocate experimental units in an experiment, an outcome may depend on decision rules, and each decision rule has its own perspective. In this chapter, we address the three practical issues in small-sample dose-response studies: (i) model-sensitivity, (ii) disagreement in prior knowledge and (iii) conflicting perspective in decision rules.

### Keywords

• dose-response models
• model-sensitivity
• model-averaging
• prior-sensitivity
• consensus prior
• Bayesian decision theory
• individual-level ethics
• population-level ethics
• sequential decisions
• continual reassessment method
• c-optimal design
• Phase I clinical trials

## 1. Introduction

Dose-response modeling is often used to learn about the effect of an agent on a particular outcome with respect to dose. It is widely applied to animal-based cancer risk assessments and human-based clinical trials. The sample size is typically small, so many statistical issues can arise from a limited amount of data. These issues include the impact of a misspecified model, prior-sensitivity, and conflicting ethical perspectives in clinical trials. In this chapter, we focus on cases in which the outcome variable of interest is binary (a predefined event happened or not) when an experimental unit is exposed to a dose. The main ideas carry over to cases in which the outcome variable is continuous or discrete.

There are two different approaches to statistical inference. One approach is frequentist inference; in this framework, we often rely on the sampling distribution of a statistic and large-sample theory. The other approach is Bayesian inference. It is founded on Bayes' Theorem, and it allows researchers to express prior knowledge independent of data. In a small-sample study, Bayesian inference can be more useful than frequentist inference because we can incorporate both the researcher's prior knowledge and the observed data to make inference about the parameter of interest. Bayesian ideas are briefly introduced for dose-response modeling with a binary outcome in Section 2.

In a small-sample study, we often rely on a parametric model to gain statistical efficiency (i.e., less variance in parameter estimation), but our inference can be severely biased by the use of a wrong model. To account for model uncertainty, it is reasonable to specify multiple models and make inference based on "averaged" inference. In this regard, Bayesian model averaging (BMA) is a useful method to gain robustness [1]. The BMA method has a wide range of applications, and we focus on its application to animal-based cancer risk assessments in Section 3.

In clinical trials, study participants are real patients, and therefore we need to consider ethics carefully. There are conflicting perspectives of individual- and population-level ethics in early phase clinical trials. Individual-level ethics focuses on the benefit of trial participants, whereas population-level ethics focuses on the benefit of future patients, which may require some level of sacrifice from trial participants. We compare the two conflicting perspectives in clinical trials based on Bayesian decision theory, and we discuss a compromising method in Section 4 [2, 3].

The sample size for an early phase (Phase I) clinical trial is often less than 30 subjects. Dose allocations for the first few patients and statistical inference for future patients depend heavily on researchers' prior knowledge when data are sparse. When multiple researchers have different prior knowledge about a parameter of interest, one compromising approach is to combine their prior elicitations and average them (i.e., a consensus prior) [4, 5]. When we average the prior elicitations, there are two approaches to determining the weight of each elicitation: weights determined before observing data and weights determined after observing data. We discuss the operating characteristics of the two weighting methods in the context of Phase I clinical trials in Section 5.

## 2. Bayesian inference

In statistics, we address a research question through a parameter, which is often denoted by θ. We begin Bayesian inference by modeling the prior knowledge about θ. The function that models the prior knowledge about θ is called the prior density function of θ, and we denote it by f(θ). It is a non-negative function satisfying $\int_\Omega f(\theta)\,d\theta = 1$, where Ω is the set of all possible values of θ (i.e., the parameter space). We then model data $y=(y_1,\ldots,y_n)$ given θ. The likelihood function, denoted $f(y\mid\theta)$, quantifies the likelihood of observing a particular sample $y=(y_1,\ldots,y_n)$ under an assumed probability model. By Bayes' Theorem, we update our knowledge about θ after observing data y as

$$f(\theta\mid y) = \frac{f(y\mid\theta)\,f(\theta)}{f(y)}. \tag{1}$$

The function $f(\theta\mid y)$ is called the posterior density function of θ given data y. Since we treat the observed data $y=(y_1,\ldots,y_n)$ as fixed numbers, we often express Eq. (1) as follows

$$f(\theta\mid y) \propto f(y\mid\theta)\,f(\theta), \qquad \text{i.e.,} \qquad f(\theta\mid y) = k\,f(y\mid\theta)\,f(\theta), \tag{2}$$

where k is the normalizing constant that makes $\int_\Omega f(\theta\mid y)\,d\theta = 1$. We can often recognize $f(\theta\mid y)$ from the prior density function f(θ) and the likelihood function $f(y\mid\theta)$ without considering the denominator $f(y)=\int f(y\mid\theta)\,f(\theta)\,d\theta$ in Eq. (1), which is called the marginal likelihood.

### 2.1. Example

Suppose we observe n = 20 rats for 2 years. Let π be the parameter of interest, interpreted as the probability of developing some type of tumor. Suppose a researcher models the prior knowledge about π using the prior density function

$$f(\pi) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,\pi^{a-1}(1-\pi)^{b-1}, \qquad 0 < \pi < 1. \tag{3}$$

This is known as the beta distribution with shape parameters a > 0 and b > 0. We often denote it by $\pi \sim \mathrm{Beta}(a,b)$, and the values of a and b must be specified by the researcher independently of the data. Let $y=(y_1,\ldots,y_n)$ denote the observed data, where $y_i = 1$ if the ith rat developed a tumor and $y_i = 0$ otherwise. Assuming $y_1,\ldots,y_n$ are independent observations, the likelihood function is as follows

$$f(y\mid\pi) = \prod_{i=1}^{n} \pi^{y_i}(1-\pi)^{1-y_i} = \pi^{s}(1-\pi)^{n-s}, \tag{4}$$

where $s=\sum_{i=1}^{n} y_i$ is the total number of rats that developed a tumor. By Eq. (2), the posterior density function of π is as follows

$$f(\pi\mid y) = k\,\pi^{a+s-1}(1-\pi)^{b+n-s-1}, \tag{5}$$

where $k=\frac{\Gamma(a+b+n)}{\Gamma(a+s)\,\Gamma(b+n-s)}$ is the normalizing constant, which makes $\int_0^1 f(\pi\mid y)\,d\pi = 1$. We can recognize that $\pi\mid y \sim \mathrm{Beta}(a+s,\, b+n-s)$.

If the researcher fixed a = 2 and b = 3 and observed s = 9 from a sample of size n = 20, the prior density function is $f(\pi)=k\,\pi(1-\pi)^{2}$ with $k=\frac{\Gamma(5)}{\Gamma(2)\,\Gamma(3)}=12$, and the posterior density function is $f(\pi\mid y)=k\,\pi^{10}(1-\pi)^{13}$ with $k=\frac{\Gamma(25)}{\Gamma(11)\,\Gamma(14)}=27{,}457{,}584$. The prior and posterior distributions are shown in Figure 1. The knowledge about π becomes more certain (less variance) after observing the data.
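The conjugate update above can be verified numerically; a minimal sketch in Python using only the standard library:

```python
from math import gamma

# Beta(2, 3) prior for pi; s = 9 tumors observed among n = 20 rats.
a, b = 2, 3
n, s = 20, 9

# Conjugate update, Eq. (5): pi | y ~ Beta(a + s, b + n - s) = Beta(11, 14).
a_post, b_post = a + s, b + n - s

# Normalizing constant k = Gamma(a+b+n) / (Gamma(a+s) * Gamma(b+n-s)).
k = gamma(a_post + b_post) / (gamma(a_post) * gamma(b_post))
print(round(k))              # 27457584

prior_var = a * b / ((a + b) ** 2 * (a + b + 1))
post_var = a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
print(post_var < prior_var)  # True: less variance after observing the data
```

The posterior mean moves from the prior mean a/(a + b) = 0.4 toward the sample proportion 9/20, landing at 11/25 = 0.44.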

### 2.2. Example

This example is simplified from Shao and Small [6]. In dose-response studies, we model π as a function of dose x. Many link functions between π and x are used in practice. In this example, we focus on the link function

$$\pi_x = \frac{e^{\beta_0+\beta_1 x}}{1+e^{\beta_0+\beta_1 x}}, \tag{6}$$

which is known as the logistic regression model. It is commonly assumed that a dose-response curve increases with respect to dose, so we assume β1 > 0 (and β0 can be any real number). There are two regression parameters in Eq. (6), β0 and β1, and we denote them by $\beta=(\beta_0,\beta_1)$. Figure 2 presents two dose-response curves: the solid curve is generated by $\beta=(-1,2)$, and the dotted curve by $\beta=(-2,5)$. As β0 increases, the background risk $\pi_0=\frac{e^{\beta_0}}{1+e^{\beta_0}}$ increases, where π0 is interpreted as the probability of tumor development at dose x = 0. The dose-response curve increases when β1 > 0 and decreases when β1 < 0, and the rate of change in the dose-response curve is determined by |β1|.

To express prior knowledge about β, we need to find an appropriate prior density function f(β). This is not simple because it is difficult to express one's knowledge on the two-dimensional parameter $\beta=(\beta_0,\beta_1)$. For mathematical convenience, some practitioners use a flat prior density function $f(\beta)\propto 1$. Another way of expressing a lack of prior knowledge about β is as follows

$$f(\beta) \propto \frac{1}{2\pi\sigma^2}\, e^{-\frac{\beta_0^2+\beta_1^2}{2\sigma^2}}\, I_{\beta_1>0} \tag{7}$$

with an arbitrarily large value of σ [6]. When a reliable source of prior information is available, there is a practical method known as the conditional mean prior [7], which is discussed in Section 4.2. In an experiment, the experimental doses $x=(x_1,\ldots,x_n)$ are fixed, and we observe random binary outcomes $y=(y_1,\ldots,y_n)$. Given y (and fixed x), the likelihood function is as follows

$$f(y\mid\beta) = \prod_{i=1}^{n}\left(\frac{e^{\beta_0+\beta_1 x_i}}{1+e^{\beta_0+\beta_1 x_i}}\right)^{y_i}\left(\frac{1}{1+e^{\beta_0+\beta_1 x_i}}\right)^{1-y_i} = \frac{e^{\beta_0 s_1+\beta_1 s_2}}{\prod_{i=1}^{n}\left(1+e^{\beta_0+\beta_1 x_i}\right)}, \tag{8}$$

where $s_1=\sum_{i=1}^{n} y_i$ and $s_2=\sum_{i=1}^{n} x_i y_i$. By incorporating both prior and data, the posterior density function is as follows

$$f(\beta\mid y) \propto f(\beta)\,\frac{e^{\beta_0 s_1+\beta_1 s_2}}{\prod_{i=1}^{n}\left(1+e^{\beta_0+\beta_1 x_i}\right)}. \tag{9}$$

In animal-based studies, one parameter of interest is the median effective dose, denoted ED50. It is the dose that satisfies

$$\pi_{ED_{50}} = \frac{e^{\beta_0+\beta_1 ED_{50}}}{1+e^{\beta_0+\beta_1 ED_{50}}} = .5, \tag{10}$$

and it can be shown by algebra that $ED_{50}=-\beta_0/\beta_1$. In the case of β0 = −2 and β1 = 5, we have ED50 = .4, as described by the dotted curve in the figure. In the case of β0 = −1 and β1 = 2, we have ED50 = .5, as described by the solid curve.

In 1997, the International Agency for Research on Cancer classified 2,3,7,8-tetrachlorodibenzo-p-dioxin (known as TCDD) as a human carcinogen based on various empirical evidence [8]. In 1978, Kociba et al. presented data on male Sprague-Dawley rats at four experimental doses: 0, 1, 10 and 100 nanograms per kilogram per day (ng/kg/day) [9]. In the control dose group, nine of 86 rats developed a tumor (hepatocellular carcinoma); three of 50 rats developed the tumor at dose 1; 18 of 50 rats at dose 10; and 34 of 48 rats at dose 100 [6]. Without loss of generality, we let $x_i=0$ for i = 1,…, 86; $x_i=1$ for i = 87,…, 136; $x_i=10$ for i = 137,…, 186; and $x_i=100$ for i = 187,…, 234. The given information is sufficient to calculate $s_1=\sum_{i=1}^{n} y_i=64$ and $s_2=\sum_{i=1}^{n} x_i y_i=3583$. Using the flat prior $f(\beta)\propto 1$ with the restriction β1 > 0, given the observed sample of size n = 234, we can generate random draws of $\beta=(\beta_0,\beta_1)$ from the posterior density function

$$f(\beta\mid y) \propto \frac{e^{\beta_0 s_1+\beta_1 s_2}}{\prod_{i=1}^{n}\left(1+e^{\beta_0+\beta_1 x_i}\right)}\, I_{\beta_1>0}, \tag{11}$$

where $I_{\beta_1>0}=1$ if β1 > 0 and $I_{\beta_1>0}=0$ otherwise. Using Markov chain Monte Carlo (MCMC), we can approximate the posterior distribution of β as shown in the left panel of Figure 3. By transforming (β0, β1) to $ED_{50}=-\beta_0/\beta_1$, we can approximate the posterior distribution of the median effective dose ED50 as shown in the right panel. The posterior mean of ED50 is $E(ED_{50}\mid y)=64.9$ with 95% credible interval (50.8, 82.5), the 2.5th and 97.5th percentiles of the posterior distribution.
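A minimal sketch of this posterior sampling, using a random-walk Metropolis algorithm; the starting values, step sizes, and chain length below are illustrative tuning choices, not the chapter's:

```python
import math
import random

random.seed(1)

# Kociba et al. TCDD data [9]: (dose, number of rats, number with tumor).
groups = [(0, 86, 9), (1, 50, 3), (10, 50, 18), (100, 48, 34)]
s1 = sum(t for _, _, t in groups)        # 64
s2 = sum(x * t for x, _, t in groups)    # 3583

def log_post(b0, b1):
    """Log of Eq. (11): flat prior restricted to b1 > 0."""
    if b1 <= 0:
        return -math.inf
    lp = b0 * s1 + b1 * s2
    for x, m, _ in groups:
        lp -= m * math.log(1 + math.exp(b0 + b1 * x))
    return lp

# Random-walk Metropolis with Gaussian proposals.
b0, b1 = -2.0, 0.02
ed50_draws = []
for it in range(20000):
    c0, c1 = b0 + random.gauss(0, 0.15), b1 + random.gauss(0, 0.003)
    if math.log(random.random()) < log_post(c0, c1) - log_post(b0, b1):
        b0, b1 = c0, c1
    if it >= 5000:                        # discard burn-in
        ed50_draws.append(-b0 / b1)       # ED50 = -beta0 / beta1

ed50_mean = sum(ed50_draws) / len(ed50_draws)
# With a long enough, well-mixed chain, this should approach the
# reported posterior mean E(ED50 | y) = 64.9.
print(round(ed50_mean, 1))
```

In practice one would use an established sampler (e.g., Stan or PyMC) and monitor convergence, but the sketch shows how Eq. (11) is all that is needed to drive the chain.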

## 3. Bayesian model averaging

In a small sample, we borrow the strength of a parametric model to gain efficiency in parameter estimation. However, an assumed model may not describe the true dose-response relationship adequately, and the impact of model misspecification is not negligible, particularly under a poor experimental design. In such limited practical situations, Bayesian model averaging (BMA) is a useful method to account for model uncertainty. It is widely applied in practice; in this section, we focus on its application to cancer risk assessment for the estimation of a benchmark dose [1, 6, 10, 11].

Let θ denote a parameter of interest. Suppose we have a set of K candidate models denoted by $\mathcal{M}=\{M_1,\ldots,M_K\}$. Let $\beta_k$ denote the vector of regression parameters under model $M_k$ for k = 1,…, K. Suppose θ is a function of $\beta_k$, and the interpretation of θ must be common across all models. Let $f(\beta_k\mid M_k)$ and $f(y\mid\beta_k,M_k)$ denote the prior density function and the likelihood function, respectively, under $M_k$. By the Law of Total Probability, the posterior density function of θ is as follows

$$f(\theta\mid y) = \sum_{k=1}^{K} f(\theta\mid M_k, y)\, P(M_k\mid y). \tag{12}$$

In Eq. (12), the posterior density function $f(\theta\mid M_k,y)$ depends on model $M_k$, and the posterior model probability $P(M_k\mid y)$ quantifies the plausibility of model $M_k$ after observing the data, which is given by

$$P(M_k\mid y) = \frac{f(y\mid M_k)\,P(M_k)}{\sum_{j=1}^{K} f(y\mid M_j)\,P(M_j)}. \tag{13}$$

In Eq. (13), the prior model probability $P(M_k)$ is determined before observing data such that $P(M_k)>0$ for k = 1,…, K and $\sum_{k=1}^{K} P(M_k)=1$. The marginal likelihood under $M_k$ requires the integration

$$f(y\mid M_k) = \int f(y\mid\beta_k, M_k)\, f(\beta_k\mid M_k)\, d\beta_k. \tag{14}$$

In the BMA method, all K models contribute to the inference of θ through the averaged posterior density function in Eq. (12), and the weight of each contribution is determined by Bayes' Theorem in Eq. (13).

### 3.1. Example

This example continues the example of Section 2.2. Recall that $\pi_x$ is interpreted as the probability of a toxic event (tumor development) at dose x. In many cancer risk assessments, a parameter of interest is $\theta_\gamma$ at a fixed risk level γ, which is defined as follows

$$\gamma = \frac{\pi_{\theta_\gamma} - \pi_0}{1-\pi_0} \tag{15}$$

or equivalently $\pi_{\theta_\gamma} = \pi_0 + (1-\pi_0)\gamma$. In words, $\theta_\gamma$ is the dose corresponding to a fixed increase in the risk level. In the frequentist framework, Crump defined a benchmark dose as a lower confidence limit for $\theta_\gamma$ [12]. In the Bayesian framework, an analogous definition is a lower credible bound (i.e., a fixed low percentile of the posterior distribution of $\theta_\gamma$). The definition is widely applied in public health protection [13].

In practice, γ is fixed between 0.01 and 0.1. The estimation of $\theta_\gamma$ is often highly sensitive to the assumed dose-response model because we lack information at low doses. Shao and Small fixed γ = 0.1 and applied BMA with K = 2 models, the logistic model and the quantal-linear model [6]. In the quantal-linear model, the probability of tumor development is modeled by

$$\pi_x = \beta_0 + (1-\beta_0)\left(1-e^{-\beta_1 x}\right), \tag{16}$$

with the restrictions 0 < β0 < 1 and β1 > 0 under the monotonicity assumption. The logistic model was given in Eq. (6) of Section 2.2.

Let M1 denote the logistic model, and let M2 denote the quantal-linear model. Assume uniform prior model probabilities $P(M_1)=P(M_2)=.5$ and flat priors on the regression parameters. By posterior sampling, we can approximate the posterior model probabilities as $P(M_1\mid y)=.049$ and $P(M_2\mid y)=.951$. Under M1, the posterior mean of θ0.1 is 20.95 with 5th percentile 16.74. Under M2, the posterior mean is 8.25 with 5th percentile 5.95. These results are very similar to those reported by Shao and Small [6]. From these model-specific statistics, we can calculate the model-averaged posterior mean

$$E(\theta_{0.1}\mid y) = \sum_{k=1}^{2} E(\theta_{0.1}\mid M_k, y)\, P(M_k\mid y) = 20.95(.049) + 8.25(.951) = 8.87. \tag{17}$$

However, we cannot calculate the 5th percentile of the model-averaged posterior distribution from these statistics alone. Instead, we need to approximate the posterior distribution $f(\theta_{0.1}\mid y)$, which is a mixture of $f(\theta_{0.1}\mid M_1,y)$ and $f(\theta_{0.1}\mid M_2,y)$ weighted by $P(M_1\mid y)=.049$ and $P(M_2\mid y)=.951$, respectively, as shown in Figure 4. In the figure, the left panel shows an approximation of $f(\theta_{0.1}\mid M_1,y)$, the middle panel shows an approximation of $f(\theta_{0.1}\mid M_2,y)$, and the right panel shows an approximation of the averaged posterior $f(\theta_{0.1}\mid y)$. The averaged posterior density $f(\theta_{0.1}\mid y)$ is bimodal, but it is very close to $f(\theta_{0.1}\mid M_2,y)$ because the quantal-linear model M2 fits the data better than the logistic model M1 by a Bayes factor of $\frac{P(M_2\mid y)}{P(M_1\mid y)}=\frac{.951}{.049}=19.4$. The 5th percentile of the model-averaged posterior distribution is approximately 5.97; this is a BMA-BMD based on the BMA method proposed by Raftery et al. [1] and the BMD estimation method suggested by Crump [12].
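The mixture construction of the model-averaged posterior can be sketched as follows. The normal distributions standing in for the model-specific posteriors of θ0.1 are hypothetical placeholders (centered at the reported posterior means, with assumed spreads), so only the mixing logic, not the exact percentile, should be taken from this sketch:

```python
import random

random.seed(0)
M = 50000
# Hypothetical normal stand-ins for the model-specific posteriors of
# theta_0.1; the means are from the text, the spreads are assumptions.
draws_m1 = [random.gauss(20.95, 1.3) for _ in range(M)]
draws_m2 = [random.gauss(8.25, 1.4) for _ in range(M)]
w1 = 0.049                               # P(M1 | y); P(M2 | y) = 1 - w1

# Sample the mixture of Eq. (12): pick a model with its posterior model
# probability, then take a draw from that model's posterior.
bma = [d1 if random.random() < w1 else d2
       for d1, d2 in zip(draws_m1, draws_m2)]

bma_mean = sum(bma) / len(bma)
bma_5th = sorted(bma)[int(0.05 * len(bma))]
print(round(bma_mean, 1))   # near 0.049 * 20.95 + 0.951 * 8.25 = 8.87
```

The 5th percentile of the mixture is then read off the pooled draws, which is exactly what cannot be done from the two model-specific percentiles alone.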

## 4. Application of Bayesian decision theory to Phase I trials

In a Phase I cancer trial, the main objectives are to study the safety of a new chemotherapy and to determine an appropriate dose for future patients. Since trial participants are cancer patients, dose allocations require ethical considerations. Whitehead and Williams discussed several Bayesian approaches to dose allocations [14]. One decision rule is devised from the perspective of trial participants (individual-level ethics), and another decision rule is devised from the perspective of future patients (population-level ethics). However, a decision rule, which is devised from the population-level ethics, is not widely accepted in current practice [15]. Instead, there are some proposed decision rules, which compromise between the individual- and population-level perspectives [3, 16]. In this section, we discuss the two conflicting perspectives in Phase I clinical trials and a compromising method based on Bayesian decision theory.

Assume a dose-response relationship follows a logistic model

$$\pi_x = \frac{e^{\beta_0+\beta_1 x}}{1+e^{\beta_0+\beta_1 x}}, \tag{18}$$

where x is the dose on the logarithmic scale (base e) and $\pi_x$ is the probability of observing an adverse event due to the toxicity of a new chemotherapy at dose x. The logarithmic transformation of dose ensures $\pi_x \to 0$ as the raw dose tends to zero (i.e., as $x \to -\infty$). Let $x_n=(x_1,\ldots,x_n)$ denote a series of decisions (allocated doses) for n patients and $y_n=(y_1,\ldots,y_n)$ denote the series of observed responses, where $y_i=1$ indicates an adverse event and $y_i=0$ otherwise. Let $L(\beta,x_{n+1})$ denote the loss from allocating the next patient at $x_{n+1}$. Based on Bayesian decision theory, we want to find the $x_{n+1}$ that minimizes the posterior mean of $L(\beta,x_{n+1})$. If we let $\mathcal{A}$ denote the action space, the set of all possible dose allocations for the next patient, the decision rule can be written as follows:

$$x_{n+1}^{*} = \arg\min_{x_{n+1}\in\mathcal{A}} E\left(L(\beta, x_{n+1}) \mid y_n\right). \tag{19}$$

The choice of L has a substantial impact on the operating characteristics of a Phase I trial, including (i) the degree of under- and over-dosing in the trial, (ii) the observed number of adverse events at the end of the trial, and (iii) the quality of estimation at the end of the trial.

### 4.1. Parameter of interest: maximum tolerable dose

Let N denote the available sample size for a Phase I clinical trial; a typical sample size is N ≤ 30. Let γ denote a target risk level, the probability of an adverse event. In a cancer study, a typical target risk level γ is fixed between .15 and .35 depending on the severity of the adverse event. The dose corresponding to γ is called the maximum tolerable dose (MTD) at level γ, and we denote it by $\theta_\gamma$ on the logarithmic scale. Under the logistic model in Eq. (18), it is defined as follows

$$\theta_\gamma = \frac{\log\left(\frac{\gamma}{1-\gamma}\right) - \beta_0}{\beta_1}. \tag{20}$$

At the end of a trial (after observing N responses), we estimate $\theta_\gamma$ by the posterior mean $\hat\theta_{\gamma,N}=E(\theta_\gamma\mid y_N)$ for future patients.
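Eq. (20) is straightforward to evaluate; for instance, plugging in the true parameters used later in the simulation of Section 4.6 (β0 = −3, β1 = .8, γ = .2):

```python
import math

def mtd(gamma, b0, b1):
    """Maximum tolerable dose at risk level gamma, Eq. (20), on the log-dose scale."""
    return (math.log(gamma / (1 - gamma)) - b0) / b1

# True parameters of the simulation in Section 4.6.
print(round(mtd(0.2, -3.0, 0.8), 2))   # 2.02
```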

### 4.2. Prior density function: conditional mean priors

The consequence of sequential decisions depends heavily on the prior density function f(β). In particular, the first decision x1 must be made from prior knowledge alone because no empirical evidence has yet been observed. In addition, the later decisions x2, x3,… and the final inference of $\theta_\gamma$ are substantially affected by f(β), as a Phase I study is typically based on a small sample. In this regard, we want to utilize researchers' prior knowledge about β carefully, but it may be difficult to express their prior knowledge directly through f(β). In this section, we discuss a method of eliciting prior knowledge that is more tractable than prior elicitation directly on β.

Suppose a researcher selects two arbitrary doses, say x−1 < x0. The researcher may then express prior knowledge through two independent beta distributions

$$\pi_{x_i} = \frac{e^{\beta_0+\beta_1 x_i}}{1+e^{\beta_0+\beta_1 x_i}} \sim \mathrm{Beta}(a_i, b_i), \qquad i = -1, 0. \tag{21}$$

Using the Jacobian transformation from $(\pi_{x_{-1}},\pi_{x_0})$ to $\beta=(\beta_0,\beta_1)$, it can be shown that the prior density function of β is given by

$$f(\beta) \propto (x_0 - x_{-1}) \prod_{i=-1}^{0} \left(\frac{e^{\beta_0+\beta_1 x_i}}{1+e^{\beta_0+\beta_1 x_i}}\right)^{a_i} \left(\frac{1}{1+e^{\beta_0+\beta_1 x_i}}\right)^{b_i}. \tag{22}$$

This is known as the conditional mean prior under the logistic model [7].

### 4.3. Posterior density function: conjugacy

For notational convenience, we let $y_i=a_i$ and $n_i=a_i+b_i$ for i = −1, 0. By conjugacy, the posterior density function of β can be written concisely as follows

$$f(\beta\mid y_n) \propto \frac{e^{\beta_0 s_1+\beta_1 s_2}}{\prod_{i=-1}^{n}\left(1+e^{\beta_0+\beta_1 x_i}\right)^{n_i}}, \tag{23}$$

where $s_1=\sum_{i=-1}^{n} y_i$, $s_2=\sum_{i=-1}^{n} x_i y_i$ and $n_i=1$ for i = 1,…, n. After observing n responses, the decision rule for the next patient is as follows

$$x_{n+1}^{*} = \arg\min_{x_{n+1}\in\mathcal{A}} \int L(\beta, x_{n+1})\, f(\beta\mid y_n)\, d\beta. \tag{24}$$

### 4.4. Loss functions for individual- and population-level ethics

A loss function, which reflects the perspective of individual-level ethics, is as follows:

$$L_I(\beta, x_{n+1}) = (x_{n+1} - \theta_\gamma)^2. \tag{25}$$

This loss function is analogous to the original continual reassessment method proposed by O’Quigley et al. [17]. The square error loss attempts to treat a trial participant at θγ, and the expected square error loss is minimized by the posterior mean of θγ.
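The fact that the posterior mean minimizes the expected squared-error loss can be checked numerically. The posterior draws of $\theta_\gamma$ below are hypothetical stand-ins for MCMC output, and the discretized action space is an illustrative choice:

```python
import random

random.seed(0)
# Hypothetical posterior draws of theta_gamma (stand-ins for MCMC output).
draws = [random.gauss(2.0, 0.5) for _ in range(4000)]

def expected_loss(x):
    """Posterior mean of the individual-level loss L_I = (x - theta_gamma)^2."""
    return sum((x - t) ** 2 for t in draws) / len(draws)

# Discretized action space A and the decision rule of Eq. (19).
candidates = [i / 50 for i in range(0, 201)]      # doses 0.00, 0.02, ..., 4.00
best = min(candidates, key=expected_loss)

post_mean = sum(draws) / len(draws)
print(abs(best - post_mean) < 0.02)   # True: the rule picks the posterior mean
```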

From the perspective of population-level ethics, Whitehead and Brunier proposed a loss function, which is equal to the asymptotic variance of the maximum likelihood estimator for θγ[18]. The Fisher expected information matrix with a sample of size n+ 1 is given by

$$I(\beta) = \begin{pmatrix} \sum_{i=1}^{n+1}\tau_i & \sum_{i=1}^{n+1}\tau_i x_i \\ \sum_{i=1}^{n+1}\tau_i x_i & \sum_{i=1}^{n+1}\tau_i x_i^2 \end{pmatrix}, \tag{26}$$

where $\tau_i=\pi_{x_i}(1-\pi_{x_i})$. Then, the loss function (the asymptotic variance) is given by

$$L_P(\beta, x_{n+1}) = [h(\beta)]^{T} [I(\beta)]^{-1} [h(\beta)], \tag{27}$$

where

$$h(\beta) = \begin{pmatrix} \frac{\partial \theta_\gamma}{\partial \beta_0} \\[4pt] \frac{\partial \theta_\gamma}{\partial \beta_1} \end{pmatrix} = -\frac{1}{\beta_1}\begin{pmatrix} 1 \\ \theta_\gamma \end{pmatrix} \tag{28}$$

is the gradient vector, the partial derivatives of $\theta_\gamma$ with respect to β0 and β1. Kim and Gillen decomposed the population-level loss function as follows

$$L_P(\beta, x_{n+1}) = \frac{1}{\beta_1^2}\cdot\frac{\tau_{n+1}(x_{n+1}-\theta_\gamma)^2 + s_n(0)\left[(\theta_\gamma-\mu_n)^2+\sigma_n^2\right]}{\left[s_n(0)\,s_n(2)-s_n(1)\,s_n(1)\right] + s_n(0)\,\tau_{n+1}\left[(x_{n+1}-\mu_n)^2+\sigma_n^2\right]}, \tag{29}$$

where

$$s_n(m)=\sum_{i=1}^{n}\tau_i x_i^m, \quad m=0,1,2, \qquad \mu_n=\sum_{i=1}^{n}w_i x_i, \qquad \sigma_n^2=\sum_{i=1}^{n}w_i x_i^2-\left(\sum_{i=1}^{n}w_i x_i\right)^2 \tag{30}$$

with the weight defined as $w_i=\tau_i/\sum_{j=1}^{n}\tau_j$ [3]. Eq. (29) carries two important remarks. First, $L_P(\beta,x_{n+1})$ accounts for individual-level ethics by including $L_I(\beta,x_{n+1})=(x_{n+1}-\theta_\gamma)^2$ in the numerator. Second, by including $(x_{n+1}-\mu_n)^2$ in the denominator, where $\mu_n=\sum_{i=1}^{n}w_i x_i$, the population-level loss is reduced by allocating the next patient farther away from the weighted average of previously allocated doses (i.e., it is devised for information gain). In the long run, $L_P(\beta,x_{n+1})$ strikes a compromise between individual- and population-level ethics, but the compromising process is too slow to be useful in a small-sample Phase I clinical trial [3].

### 4.5. Loss function for compromising the two perspectives

Kim and Gillen proposed to accelerate the compromising process by modifying $L_P(\beta,x_{n+1})$ of Eq. (29) as follows

$$L_{B,\lambda}(\beta, x_{n+1}) = \frac{1}{\beta_1^2}\cdot\frac{a_n(\lambda)\,\tau_{n+1}(x_{n+1}-\theta_\gamma)^2 + s_n(0)\left[(\theta_\gamma-\mu_n)^2+\sigma_n^2\right]}{\left[s_n(0)\,s_n(2)-s_n(1)\,s_n(1)\right] + s_n(0)\,\tau_{n+1}\left[(x_{n+1}-\mu_n)^2+\sigma_n^2\right]}, \tag{31}$$

where

$$a_n(\lambda) = \left(1+\frac{n}{N}\right)^{\lambda\left(1+\frac{\sum_{i=1}^{n} y_i}{N\gamma}\right)} \tag{32}$$

is an accelerating factor [3]. It has two implications. First, the compromising process accelerates toward individual-level ethics as the trial proceeds (i.e., as n increases). Second, the compromise moves toward individual-level ethics at a faster rate when adverse events are observed (i.e., as $\sum_{i=1}^{n} y_i$ increases). The tuning parameter λ controls the rate of acceleration: it places more emphasis on population-level ethics as λ → 0 and more emphasis on individual-level ethics as λ → ∞. The choice of λ should depend on the severity of the adverse event.
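A small sketch of the accelerating factor's behavior, assuming the reconstruction of Eq. (32) as $a_n(\lambda)=(1+n/N)^{\lambda(1+\sum y_i/(N\gamma))}$:

```python
# Assumed reconstruction of the accelerating factor of Eq. (32):
# a_n(lambda) = (1 + n/N) ** (lambda * (1 + sum(y_i) / (N * gamma))).
def accel(n, events, lam, N=20, gamma=0.2):
    return (1 + n / N) ** (lam * (1 + events / (N * gamma)))

# lambda = 0 recovers the pure population-level loss of Eq. (29): a_n = 1.
print(accel(10, 2, 0.0))                        # 1.0
# The factor grows as the trial proceeds (n increases) ...
print(accel(15, 0, 1.0) > accel(10, 0, 1.0))    # True
# ... and grows faster once adverse events accumulate.
print(accel(10, 3, 1.0) > accel(10, 0, 1.0))    # True
```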

### 4.6. Simulation

To study the operating characteristics of $L_{B,\lambda}$ with respect to λ, we assume the logistic model with β0 = −3 and β1 = .8 as the true dose-response relationship, as shown in the left panel of Figure 5. The target risk level is fixed at γ = .2, so the true MTD is θ.2 = 2.02 on the logarithmic scale. We consider three different priors based on the conditional mean priors given in Eq. (22). For simplicity, we set a−1 = 1, b−1 = 3, a0 = 3 and b0 = 1 for all three priors. Then, we let $x_{-1}=-4$ and $x_0=4$ for Prior 1; $x_{-1}=0$ and $x_0=8$ for Prior 2; and $x_{-1}=4$ and $x_0=12$ for Prior 3. The right panel of Figure 5 shows an approximated f(θ.2) for each prior. Prior 1 significantly underestimates the true θ.2 = 2.02 with prior mean E(θ.2) = −1.70, Prior 3 overestimates the truth with E(θ.2) = 5.38, and Prior 2 has a prior estimate relatively close to the truth with E(θ.2) = 1.40.

Let N = 20 be the fixed sample size. Let Yi = 1 denote an adverse event observed from the ith patient (Yi = 0 otherwise), so $\sum_{i=1}^{N}Y_i$ denotes the total number of adverse events observed at the end of a trial. The sum $\sum_{i=1}^{N}Y_i$ varies from trial to trial, and we want it to behave like Binomial(20, .2), which is the case when we treat all N = 20 patients at the true MTD θ.2. Figure 6 shows three simulated trials under the loss function $L_{B,\lambda}$ with λ = 0, 1, 5. When λ = 0, the up-and-down scheme fluctuates strongly in order to maximize information about θ.2. When λ = 1, the up-and-down scheme stabilizes after the first few adverse events, and when λ = 5 the stabilization occurs quickly so as to treat trial participants near the estimated θ.2.

Let $\hat\theta_{.2}=E(\theta_{.2}\mid y_N)$ denote the posterior estimate of θ.2 at the end of a trial, so $\pi_{\hat\theta_{.2}}$ is the true probability of an adverse event at the estimated MTD. We focus on the following criteria: (i) $E(\pi_{\hat\theta_{.2}})$, which we desire to be close to γ = .2 for future patients; (ii) $V(\pi_{\hat\theta_{.2}})$, which we desire to be as low as possible for future patients; (iii) $E[(\pi_{\hat\theta_{.2}}-.2)^2]$, which we desire to be as low as possible for future patients; (iv) $E\left(\sum_{i=1}^{20}Y_i\right)$, which we desire to be close to Nγ = 4 for trial participants; and (v) $P\left(3\le\sum_{i=1}^{20}Y_i\le 5\right)$, which we desire to be close to one for trial participants.

Table 1 summarizes simulation results of 10,000 replicates for each prior. For all three priors, we observe similar tendencies. First, $E(\pi_{\hat\theta_{.2}})$ gets closer to γ = .2 as λ increases. Second, $V(\pi_{\hat\theta_{.2}})$ decreases as λ decreases to zero. The average square distance between $\pi_{\hat\theta_{.2}}$ and γ = .2 measures a balance between the absolute bias $|E(\pi_{\hat\theta_{.2}})-.2|$ and the variance $V(\pi_{\hat\theta_{.2}})$, and which scheme is superior depends on the prior. Lastly, as λ → 0, we obtain a larger $P\left(3\le\sum_{i=1}^{20}Y_i\le 5\right)$ and an $E\left(\sum_{i=1}^{20}Y_i\right)$ that is more robust to prior elicitation.

| Prior | $\lambda$ | $E(\pi_{\hat\theta_{.2}})$ | $V(\pi_{\hat\theta_{.2}})$ | $E[(\pi_{\hat\theta_{.2}}-.2)^2]$ | $E(\sum_{i=1}^{20}Y_i)$ | $P(3\le\sum_{i=1}^{20}Y_i\le 5)$ |
|---|---|---|---|---|---|---|
| 1 | 0 | 0.0964 | 0.0019 | 0.0126 | 2.4353 | 0.4318 |
| 1 | .5 | 0.1034 | 0.0024 | 0.0118 | 2.0997 | 0.2298 |
| 1 | 1 | 0.1082 | 0.0028 | 0.0113 | 1.8969 | 0.1714 |
| 1 | 2 | 0.1100 | 0.0031 | 0.0112 | 1.6929 | 0.1211 |
| 1 | 5 | 0.1157 | 0.0035 | 0.0106 | 1.3128 | 0.0596 |
| 2 | 0 | 0.1665 | 0.0054 | 0.0065 | 4.1217 | 0.9889 |
| 2 | .5 | 0.1705 | 0.0056 | 0.0065 | 3.9598 | 0.9877 |
| 2 | 1 | 0.1727 | 0.0060 | 0.0068 | 3.9025 | 0.9670 |
| 2 | 2 | 0.1751 | 0.0066 | 0.0072 | 3.8707 | 0.9291 |
| 2 | 5 | 0.1763 | 0.0067 | 0.0073 | 3.8442 | 0.9068 |
| 3 | 0 | 0.2743 | 0.0048 | 0.0103 | 6.1875 | 0.1600 |
| 3 | .5 | 0.2673 | 0.0048 | 0.0093 | 6.3954 | 0.1430 |
| 3 | 1 | 0.2606 | 0.0046 | 0.0083 | 6.6194 | 0.1165 |
| 3 | 2 | 0.2562 | 0.0045 | 0.0077 | 6.8035 | 0.1020 |
| 3 | 5 | 0.2499 | 0.0044 | 0.0068 | 7.0274 | 0.0760 |

### Table 1.

Simulation results of 10,000 replicates for λ = 0, .5, 1, 2, 5 and each prior.

In summary, when we place more emphasis on population-level ethics, we obtain a smaller variance in the estimation for future patients (with a greater absolute bias, potentially due to Jensen's Inequality), and the distribution of $\sum_{i=1}^{N}Y_i$ becomes more robust to prior elicitation. When we place more emphasis on individual-level ethics, we obtain a larger variance in the estimation, and the distribution of $\sum_{i=1}^{N}Y_i$ becomes more sensitive to prior elicitation.

## 5. Consensus prior

In Bayesian inference, researchers are able to utilize information that is independent of the observed data. This allows researchers to incorporate any form of information, such as personal experience and existing literature, which may be particularly useful in a small-sample study. On the other hand, subjectivity and prior sensitivity are concerns in sparse data. Furthermore, multiple researchers may disagree in their prior elicitations about a parameter θ.

Suppose there are K researchers with their own prior density functions, say $f(\theta\mid Q_k)$ for k = 1,…, K, and they share the same likelihood function $f(y\mid\theta)$. Each prior elicitation leads to a unique Bayes estimator

$$\hat\theta_k = E(\theta\mid y, Q_k) = \int \theta\, f(\theta\mid y, Q_k)\, d\theta, \tag{33}$$

where $f(\theta\mid y,Q_k)\propto f(y\mid\theta)\,f(\theta\mid Q_k)$ is the posterior density function of θ given data y and the kth prior elicitation Qk. For posterior estimation, one reasonable compromise is a weighted average $\sum_{k=1}^{K}w_k\hat\theta_k$, where $w_k>0$ for k = 1,…, K and $\sum_{k=1}^{K}w_k=1$. In this section, we discuss two different weighting methods. The first method fixes $w_k$ before observing data (referred to as the prior weighting scheme). The second method determines $w_k(y)$ after observing data y, so that $w_k(y)$ increases when the kth prior elicitation Qk is better supported by the observed data y (referred to as the posterior weighting scheme) [5].

For the prior weighting scheme, we denote $w_k=P(Q_k)$, which quantifies the credibility of the kth prior elicitation. For the posterior weighting scheme, we consider

$$w_k(y) = P(Q_k\mid y) = \frac{f(y\mid Q_k)\,P(Q_k)}{\sum_{j=1}^{K} f(y\mid Q_j)\,P(Q_j)} = \frac{w_k\, f(y\mid Q_k)}{\sum_{j=1}^{K} w_j\, f(y\mid Q_j)}, \tag{34}$$

where $f(y\mid Q_k)=\int f(y\mid\theta)\,f(\theta\mid Q_k)\,d\theta$ is the marginal likelihood from the kth prior elicitation. This formulation is similar to the BMA method discussed in Section 3. It can be shown that $\sum_{k=1}^{K}w_k(y)\hat\theta_k$ is the Bayes estimator (the posterior mean of θ) when the consensus prior $f(\theta)=\sum_{k=1}^{K}w_k\,f(\theta\mid Q_k)$ with $w_k=P(Q_k)$ is used [5].

Samaniego discussed self-consistency when compromised inference is made through the prior weighting scheme $\sum_{k=1}^{K}w_k\hat\theta_k$ [4]. Let θ denote a parameter of interest and let

$$E(\theta) = \int \theta\, f(\theta)\, d\theta = \theta^{*} \tag{35}$$

be the prior expectation, the mean of the prior density function f(θ). Let $\tilde\theta$ denote a sufficient statistic that serves as an unbiased estimator for θ. When $E(\theta\mid\tilde\theta=\theta^{*})=\theta^{*}$ is satisfied, we have self-consistency [4].

Self-consistency can be achieved under simple models. For example, let $Y=(Y_1,\ldots,Y_n)$ be a random sample, where $Y_i\sim\mathrm{Bernoulli}(\theta)$, and assume the prior $\theta\sim\mathrm{Beta}(a,b)$. It can be shown that the maximum likelihood estimator $\tilde\theta=\frac{1}{n}\sum_{i=1}^{n}Y_i$ is a sufficient statistic and an unbiased estimator for θ. The posterior mean is a weighted average of $\theta^{*}$ and $\tilde\theta$ as follows

$$E(\theta\mid\tilde\theta=\theta^{*}) = c\,\theta^{*} + (1-c)\,\tilde\theta, \tag{36}$$

where $c=\frac{a+b}{a+b+n}$. If we observe $\tilde\theta=\theta^{*}$, self-consistency is achieved because $E(\theta\mid\tilde\theta=\theta^{*})=\theta^{*}$. In words, when the prior estimate and the maximum likelihood estimate are identical, the posterior estimate is consistent with both. Self-consistency can also be achieved in the prior weighting scheme under certain conditions, as illustrated in the following example.

### 5.1. Binomial experiment

Let $Y_i\sim\mathrm{Bernoulli}(\pi)$ for i = 1,…, n, and assume $Y_1,\ldots,Y_n$ are independent. Suppose the kth researcher specifies the prior distribution $\pi\mid Q_k\sim\mathrm{Beta}(a_k,b_k)$ for k = 1,…, K. For the prior weighting scheme, let $w_k=P(Q_k)$ be the prior probability of the kth prior elicitation (fixed before observing data). Since $E(\pi\mid Q_k)=\frac{a_k}{a_k+b_k}$ and the expectation E(⋅) is a linear operator, the mean of the consensus prior is

$$E(\pi) = \int_0^1 \pi\, f(\pi)\, d\pi = \int_0^1 \pi\left(\sum_{k=1}^{K} f(\pi\mid Q_k)\,P(Q_k)\right) d\pi = \sum_{k=1}^{K} w_k\left(\int_0^1 \pi\, f(\pi\mid Q_k)\, d\pi\right) = \sum_{k=1}^{K} w_k\, E(\pi\mid Q_k). \tag{37}$$

Let $E(\pi)=\pi^{*}$ and suppose the K researchers observed the consistent result $\tilde\pi=\frac{1}{n}\sum_{i=1}^{n}Y_i=\pi^{*}$. The researcher-specific Bayes estimator is as follows

$$\hat\pi_k = E(\pi\mid\tilde\pi=\pi^{*}, Q_k) = c_k\, E(\pi\mid Q_k) + (1-c_k)\,\pi^{*}, \tag{38}$$

for the kth researcher, where $c_k=\frac{a_k+b_k}{a_k+b_k+n}$. The compromised Bayes estimator is as follows

$$E(\pi\mid\tilde\pi=\pi^{*}) = \sum_{k=1}^{K} w_k\hat\pi_k = \sum_{k=1}^{K} w_k\left[c_k\, E(\pi\mid Q_k) + (1-c_k)\,\pi^{*}\right]. \tag{39}$$

If we allow researcher-specific prior elicitations $a_k$ and $b_k$ with the restriction $a_k+b_k=m$ for all K researchers (i.e., the same strength of prior elicitation), the value $c_k=\frac{m}{m+n}$ is constant over all researchers. Letting the constant $c_k=c$,

$$E(\pi\mid\tilde\pi=\pi^{*}) = c\left(\sum_{k=1}^{K} w_k\, E(\pi\mid Q_k)\right) + (1-c)\,\pi^{*}\left(\sum_{k=1}^{K} w_k\right) = c\,E(\pi) + (1-c)\,\pi^{*} = \pi^{*}, \tag{40}$$

so self-consistency is satisfied.
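The self-consistency argument of Eqs. (37)-(40) can be checked numerically; the hyper-parameters below are illustrative choices with $a_k+b_k=m=4$ for both researchers:

```python
# Numerical check of Eqs. (37)-(40) with illustrative hyper-parameters.
n, m = 10, 4
priors = [(1, 3), (3, 1)]          # (a_k, b_k), each with a_k + b_k = m
weights = [0.5, 0.5]               # w_k = P(Q_k)

# Consensus prior mean pi* = sum_k w_k * a_k / (a_k + b_k), Eq. (37).
pi_star = sum(w * a / (a + b) for w, (a, b) in zip(weights, priors))

# Suppose the observed MLE equals pi*: s = n * pi* events.
s = n * pi_star
post_means = [(a + s) / (a + b + n) for a, b in priors]         # Eq. (38)
compromised = sum(w * p for w, p in zip(weights, post_means))   # Eq. (39)

print(pi_star)                    # 0.5
print(round(compromised, 10))     # 0.5: self-consistency holds
```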

For the posterior weighting scheme given data $y=(y_1,\ldots,y_n)$, the marginal likelihood from the kth prior elicitation is as follows

$$f(y\mid Q_k) = \int_0^1 f(y\mid\pi)\, f(\pi\mid Q_k)\, d\pi = \frac{\Gamma(a_k+b_k)}{\Gamma(a_k)\,\Gamma(b_k)}\cdot\frac{\Gamma(a_k+s)\,\Gamma(b_k+n-s)}{\Gamma(a_k+b_k+n)}, \tag{41}$$

where $s=\sum_{i=1}^{n}y_i$ is the observed sufficient statistic. The posterior weighting scheme then becomes $\sum_{k=1}^{K}w_k(y)\hat\pi_k$ with

$$w_k(y) = \frac{w_k\, f(y\mid Q_k)}{\sum_{j=1}^{K} w_j\, f(y\mid Q_j)}, \qquad \hat\pi_k = \frac{a_k+s}{a_k+b_k+n}. \tag{42}$$

If we desire equal strength from each researcher's prior elicitation, we may fix $a_k+b_k=m$ and $w_k=1/K$. In the posterior weighting scheme, it is difficult to achieve self-consistency.
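Computing the posterior weights of Eq. (42) from the beta-binomial marginal likelihood of Eq. (41) is direct; a sketch using two mildly disagreeing beta priors (prior means .25 and .75) and hypothetical data chosen to favor the second researcher:

```python
from math import exp, lgamma

def log_marginal(s, n, a, b):
    """Log of Eq. (41): beta-binomial marginal likelihood of s events in n trials."""
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + lgamma(a + s) + lgamma(b + n - s) - lgamma(a + b + n))

# Two mildly disagreeing priors with equal prior weights w_k = 1/2,
# and hypothetical data: s = 9 events in n = 10 trials.
priors = [(1, 3), (3, 1)]
n, s = 10, 9

marg = [exp(log_marginal(s, n, a, b)) for a, b in priors]
w = [m / sum(marg) for m in marg]      # posterior weights w_k(y), Eq. (42)

pi_hat = [(a + s) / (a + b + n) for a, b in priors]
estimate = sum(wk * pk for wk, pk in zip(w, pi_hat))
print(w[1] > w[0])   # True: the data better support the second prior Q2
```

Working on the log scale with `lgamma` avoids overflow of the gamma function for larger samples.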

Whether or not self-consistency is satisfied, the practical concern is the quality of estimation, such as bias, variance and mean square error. Assuming K = 2 researchers have disagreeing prior knowledge and a sample of size n = 10, let us consider three cases. Suppose the two researchers express relatively mild disagreement as $(a_1,b_1)=(1,3)$ and $(a_2,b_2)=(3,1)$ in Case 1, relatively strong disagreement as $(a_1,b_1)=(2,6)$ and $(a_2,b_2)=(6,2)$ in Case 2, and even stronger disagreement as $(a_1,b_1)=(3,9)$ and $(a_2,b_2)=(9,3)$ in Case 3. For each case, Figure 7 provides the relative bias, variance and mean square error (MSE) comparing the posterior weighting scheme $\sum_{k}w_k(y)\hat\pi_k$ to the prior weighting scheme $\sum_{k}w_k\hat\pi_k$. A relative MSE smaller than one implies a smaller MSE for the posterior weighting scheme. When the true value of π lies well between the two prior guesses $E(\pi\mid Q_1)=.25$ and $E(\pi\mid Q_2)=.75$, the posterior weighting scheme shows a greater MSE due to greater variance. When the true value of π deviates from either prior guess, the posterior weighting scheme shows a smaller MSE due to smaller bias. This tendency is stronger when the two disagreeing prior elicitations are themselves stronger (i.e., stronger prior disagreement). The bottom line is a clear bias-variance tradeoff between the two weighting schemes: $\sum_{k}w_k(y)\hat\pi_k$ reduces bias when there is a strong discrepancy between the consensus prior and the data, but it has larger variance than $\sum_{k}w_k\hat\pi_k$ because $w_k(y)$ depends on the random data.

### 5.2. Applications to Phase I trials under logistic regression model

In this section, we apply the prior weighting scheme and the posterior weighting scheme to Phase I clinical trials under the logistic regression model. We consider the three priors from Section 4.6, denoting Priors 1, 2 and 3 by $Q_1$, $Q_2$ and $Q_3$, respectively. The three priors share the same hyper-parameters $a_{1,k} = 1$, $b_{1,k} = 3$, $a_{0,k} = 3$, $b_{0,k} = 1$, but they differ by $x_{1,k} = -4, 0, 4$ and $x_{0,k} = 4, 8, 12$ for $k = 1, 2, 3$, respectively. By the use of the conditional mean prior in Eq. (22), the prior density function of $\beta$ for prior $Q_k$ is given by

$$f(\beta \mid Q_k) \propto (x_{0,k} - x_{1,k}) \prod_{i=0}^{1} \left( \frac{e^{\beta_0 + \beta_1 x_{i,k}}}{1 + e^{\beta_0 + \beta_1 x_{i,k}}} \right)^{a_{i,k}} \left( \frac{1}{1 + e^{\beta_0 + \beta_1 x_{i,k}}} \right)^{b_{i,k}}. \tag{43}$$

The prior means were $E(\theta_{.2} \mid Q_1) = -1.70$, $E(\theta_{.2} \mid Q_2) = 1.40$ and $E(\theta_{.2} \mid Q_3) = 5.38$ for Priors 1, 2 and 3, respectively.

For the simulation study, we consider three scenarios with sample size $N = 20$. In Scenario 1, we assume $\beta_0 = -5$ and $\beta_1 = .6$, so the true MTD is $\theta_{.2} = 6.02$, which deviates substantially from all three prior means. In Scenario 2, we assume $\beta_0 = -3$ and $\beta_1 = .8$ as in Section 4.6, so $\theta_{.2} = 2.02$ is well surrounded by the three prior means. In Scenario 3, we assume $\beta_0 = -1$ and $\beta_1 = 1.2$, so $\theta_{.2} = -.32$ is close to the most conservative prior mean $E(\theta_{.2} \mid Q_1) = -1.70$. We consider the loss function $L_I(\beta, x_{n+1}) = (x_{n+1} - \theta_{.2})^2$ discussed in Section 4.4, which focuses on individual-level ethics. We use the uniform prior probabilities $w_k = P(Q_k) = 1/3$ for $k = 1, 2, 3$ when implementing both the prior and the posterior weighting scheme.
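The true MTDs quoted for the three scenarios follow directly from the logistic model, since $\theta_\gamma$ solves $\operatorname{logit}(\gamma) = \beta_0 + \beta_1 \theta_\gamma$. A quick check (the function name is mine):

```python
from math import log

def mtd(beta0, beta1, gamma=0.2):
    # theta_gamma = (logit(gamma) - beta0) / beta1 under the logistic model
    return (log(gamma / (1 - gamma)) - beta0) / beta1

# (beta0, beta1) for Scenarios 1-3
scenarios = [(-5, 0.6), (-3, 0.8), (-1, 1.2)]
mtds = [round(mtd(b0, b1), 2) for b0, b1 in scenarios]  # [6.02, 2.02, -0.32]
```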

Table 2 provides the simulation results of 10,000 replicates for each scenario under the prior weighting scheme and under the posterior weighting scheme. Since the posterior weighting scheme adaptively updates $w_k(y)$ based on empirical evidence, it can reduce bias, but it has greater variance in the estimation of $\theta_{.2}$. As a consequence, when the true MTD was close to one extreme prior estimate (Scenarios 1 and 3), the posterior weighting scheme yielded a smaller $E[(\pi_{\hat{\theta}_{.2}} - .2)^2]$, an $E(\sum_{i=1}^{20} Y_i)$ closer to $N\gamma = 4$, and a $P(3 \le \sum_{i=1}^{20} Y_i \le 5)$ closer to one when compared to the prior weighting scheme. In Scenario 3, the average number of adverse events was 4.6 under the posterior weighting scheme, but it was as high as 7.1 under the prior weighting scheme. On the other hand, when the true MTD was well surrounded by the three prior estimates (Scenario 2), the prior weighting scheme yielded more plausible results.

| Scenario | Method | $E(\pi_{\hat{\theta}_{.2}})$ | $V(\pi_{\hat{\theta}_{.2}})$ | $E[(\pi_{\hat{\theta}_{.2}} - .2)^2]$ | $E(\sum_{i=1}^{20} Y_i)$ | $P(3 \le \sum_{i=1}^{20} Y_i \le 5)$ |
|---|---|---|---|---|---|---|
| 1 | Prior weighting | 0.0967 | 0.0014 | 0.0121 | 1.1090 | 0.0398 |
| 1 | Posterior weighting | 0.1853 | 0.0073 | 0.0075 | 2.7304 | 0.5900 |
| 2 | Prior weighting | 0.2018 | 0.0059 | 0.0059 | 3.8432 | 0.9042 |
| 2 | Posterior weighting | 0.2048 | 0.0110 | 0.0110 | 4.2848 | 0.8920 |
| 3 | Prior weighting | 0.2929 | 0.0071 | 0.0157 | 7.1090 | 0.0568 |
| 3 | Posterior weighting | 0.1951 | 0.0133 | 0.0133 | 4.6036 | 0.8646 |

### Table 2.

Simulation results of 10,000 replicates for the prior and posterior weighting schemes.

The simulation results are analogous to those for the simpler model in Section 5.1. When the true parameter is not well surrounded by the prior guesses, the posterior weighting scheme is preferable with respect to mean square error due to its smaller bias. When the true parameter is well surrounded by the prior guesses, the prior weighting scheme is beneficial with respect to mean square error due to its smaller variance.
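The columns of Table 2 are internally consistent via the usual decomposition $E[(X - .2)^2] = V(X) + (E[X] - .2)^2$, which is exactly the bias-variance tradeoff discussed above. For example, the Scenario 1 MSE entries can be reproduced as follows (a verification sketch, not chapter code):

```python
def mse_about_target(mean, var, target=0.2):
    # E[(X - target)^2] = V(X) + (E[X] - target)^2
    return var + (mean - target) ** 2

# Scenario 1 of Table 2: (mean, variance) of pi at the estimated MTD
prior_w = round(mse_about_target(0.0967, 0.0014), 4)      # 0.0121
posterior_w = round(mse_about_target(0.1853, 0.0073), 4)  # 0.0075
```

The prior weighting scheme has the smaller variance but the larger bias, and in this scenario the bias term dominates the MSE.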

As a final comment, we must be careful about the strength of the individual prior elicitations when implementing the posterior weighting scheme in Phase I clinical trials. The strength of an individual prior elicitation depends on (i) the hyper-parameters $a_{i,k}$ and $b_{i,k}$, (ii) the prior weight $w_k = P(Q_k)$ and (iii) the distance between the two arbitrarily chosen doses, $x_{0,k} - x_{1,k}$. This can be seen through the expression

$$f(\beta) = \sum_{k=1}^{K} f(\beta \mid Q_k)\, P(Q_k) \propto \sum_{k=1}^{K} w_k\, (x_{0,k} - x_{1,k}) \prod_{i=0}^{1} \left( \frac{e^{\beta_0 + \beta_1 x_{i,k}}}{1 + e^{\beta_0 + \beta_1 x_{i,k}}} \right)^{a_{i,k}} \left( \frac{1}{1 + e^{\beta_0 + \beta_1 x_{i,k}}} \right)^{b_{i,k}}. \tag{44}$$

When researchers determine consensus prior elicitations before initiating a trial, the multiplicative term $w_k\,(x_{0,k} - x_{1,k})$ should be considered carefully together with the hyper-parameters $a_{i,k}$ and $b_{i,k}$ [5].
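For intuition, the unnormalized mixture prior of Eq. (44) is straightforward to evaluate pointwise; the sketch below (function and variable names are mine) makes explicit how $w_k$ and $(x_{0,k} - x_{1,k})$ enter multiplicatively for the three elicitations of Section 5.2:

```python
from math import exp

def logistic(z):
    return 1.0 / (1.0 + exp(-z))

def mixture_prior(beta0, beta1, elicitations):
    # unnormalized mixture prior density of Eq. (44); each elicitation
    # is a tuple (w_k, x1_k, x0_k, a1_k, b1_k, a0_k, b0_k)
    total = 0.0
    for w, x1, x0, a1, b1, a0, b0 in elicitations:
        p1 = logistic(beta0 + beta1 * x1)   # prior event probability at x1_k
        p0 = logistic(beta0 + beta1 * x0)   # prior event probability at x0_k
        total += (w * (x0 - x1)
                  * p1**a1 * (1 - p1)**b1
                  * p0**a0 * (1 - p0)**b0)
    return total

# The three elicitations of Section 5.2: same Beta hyper-parameters,
# different dose pairs (x1_k, x0_k), uniform weights w_k = 1/3
Q = [(1/3, -4.0,  4.0, 1, 3, 3, 1),
     (1/3,  0.0,  8.0, 1, 3, 3, 1),
     (1/3,  4.0, 12.0, 1, 3, 3, 1)]
density = mixture_prior(-3.0, 0.8, Q)     # positive, unnormalized
```

Doubling any $w_k$ (or widening $x_{0,k} - x_{1,k}$) scales that elicitation's contribution proportionally, which is why these terms must be set deliberately rather than by default.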

## 6. Concluding remarks

In this chapter, we have discussed Bayesian inference with averaging, balancing and compromising for sparse data. In cancer risk assessment, we observed that low-dose inference can be very sensitive to an assumed parametric model (Section 3.1). In this case, Bayesian model averaging can be a useful method: it provides robustness by using multiple models and posterior model probabilities to account for model uncertainty. In the application of Bayesian decision theory to Phase I clinical trials, we observed that the sequential sampling scheme depends heavily on the loss function. A loss function devised from individual-level ethics focuses on the benefit of trial participants, while a loss function devised from population-level ethics focuses on the benefit of future patients. It is possible to balance the two conflicting perspectives, and we can adjust the point of focus through a tuning parameter (Sections 4.5 and 4.6). Finally, the use of a weighted posterior estimate can be a compromise when two or more researchers have prior disagreement. We compared the prior and posterior weighting schemes in a small-sample binomial problem (Section 5.1) and in a small-sample Phase I clinical trial (Section 5.2). The prior weighting scheme (data-independent weights) performs better when the prior estimates surround the truth, and the posterior weighting scheme (data-dependent weights) performs better when the truth is not well surrounded by the prior estimates. Neither method outperforms the other for all parameter values, so it is important to be aware of their bias-variance tradeoff.


© 2017 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
