Before the development of next-generation sequencing (NGS) technology, carcinogenesis was regarded as a linear evolutionary process, driven by the repeated acquisition of multiple driver mutations and Darwinian selection. However, recent cancer genome analyses employing NGS have revealed the heterogeneity of mutations within a tumor, which is known as intratumor heterogeneity (ITH) and is generated by the branching evolution of cancer cells. In this chapter, we introduce a simulation modeling approach useful for understanding cancer evolution and ITH. We first describe agent-based modeling for simulating the branching evolution of cancer cells. We next demonstrate how to fit an agent-based model to observational data from cancer genome analyses, employing approximate Bayesian computation (ABC). Finally, we explain how to characterize the dynamics of the simulation model through sensitivity analysis. We not only explain the methodologies but also introduce exemplifying applications. For example, simulation modeling of cancer evolution demonstrated that ITH in colorectal cancer is generated by neutral evolution, which is caused by a high mutation rate and a stem cell hierarchy. For cancer genome analyses, new experimental technologies are actively being developed; these will unveil various aspects of cancer evolution when combined with the simulation modeling approach.
- agent-based model
- approximate Bayesian computation
- sensitivity analysis
Cancer is a clump of abnormal cells that originates from normal cells. Normal cells proliferate or stop proliferating depending on their surrounding environment. For example, when skin cells are injured, they proliferate to cover the wound; however, when the wound heals, they stop proliferating. In contrast, cancer cells continue proliferating by ignoring the surrounding environment. Moreover, cancer cells invade surrounding tissues, metastasize to distant organs, and impair functions in the human body.
Malignant transformation from normal cells to cancer cells generally results from the accumulation of somatic mutations, which are induced by various causes such as aging, ultraviolet rays, cigarette smoking, alcohol, and chemical carcinogens. Mutations that contribute to malignant transformation are known as “driver mutations”, whereas genes affected by driver mutations are called “driver genes”. Driver genes are categorized into two types: “oncogenes” and “tumor suppressor genes”. Oncogenes act as gas pedals for cell proliferation, which are constitutively turned on by driver mutations. Tumor suppressor genes act as brakes that stop cell proliferation, and inhibiting the function of these brakes is necessary for malignant transformation.
Normal cells are transformed into cancer cells when two to ten driver mutations are acquired. Because these mutations are induced not simultaneously but gradually over a long period of time, this process is known as “multi-stage carcinogenesis”. This process is also regarded as a linear evolutionary process, driven by the repeated acquisition of multiple driver mutations and Darwinian selection. Understanding cancer from an evolutionary perspective is important, as the therapeutic difficulties posed by cancer originate from its high evolutionary capacity, which easily endows cancer cells with therapeutic resistance.
Mutations in cancer cells are experimentally detected by DNA sequencing. Next-generation sequencing (NGS) technology, which emerged around 2010, enabled cancer genome analysis to comprehensively detect mutations in cancer cells. During the last decade, cancer genome analysis has revolutionized our understanding of cancer. Cancer genome analysis showed that cancer cells harbor a large number of mutations, only a small fraction of which are driver mutations; namely, most mutations in cancer cells are “neutral mutations”, which have no selective advantage (also referred to as “passenger mutations” when paired with driver mutations). By sequencing hundreds of tumor samples from different patients with the same cancer type, the repertoires of driver genes were also determined across various types of cancer. Moreover, cancer genome analysis has revealed the heterogeneity of mutations within one tumor, which is termed intratumor heterogeneity (ITH). As described above, carcinogenesis was regarded as a linear evolutionary process until the arrival of NGS; however, ITH is actually generated by the branching evolution of cancer cells.
However, cancer genome analysis is not sufficient to explain the origin of ITH. To understand the evolutionary principles underlying the generation of ITH, a simulation modeling approach is useful and increasingly employed in the field of cancer research. In this chapter, we introduce such simulation modeling approaches. We first describe agent-based modeling for simulating branching evolution of cancer cells. We next demonstrate how to fit an agent-based model to observational data obtained by cancer genome analyses, employing approximate Bayesian computation (ABC). Finally, we explain how to characterize the dynamics of the simulation models through sensitivity analysis.
2. Agent-based modeling of cancer evolution
To simulate heterogeneous cancer evolution, agent-based modeling is widely employed. An agent-based model assumes a set of system constituents, known as independent agents, and specifies rules for the independent behavior of the agents themselves, as well as for the interactions between agents and between agents and their environment. Agent-based modeling is a flexible representation, and given the initial conditions and parameters of the system, the behavior of the system can easily be analyzed by computational simulation. For modeling cancer evolution, if each cell is assumed to be an agent, ITH can easily be represented by differences in the internal states of the agents. As an example, we explain an agent-based model named the branching evolutionary process (BEP) model, which was originally introduced by Uchi et al.
The BEP model assumes that a simulated tumor grows on a two-dimensional square lattice where each cell occupies one lattice point. Initially, cells are placed as close as possible to the center of the lattice. In a unit time step, along an outward spiral starting from the center, each cell replicates and dies with prescribed replication and death probabilities, respectively. When a cell replicates, the BEP model places the daughter cell in the neighborhood of the parent cell, assuming a Moore neighborhood (i.e., the eight points surrounding a central point). If empty neighboring points exist, one of them is randomly selected. Otherwise, an empty point is created in one of the eight neighboring points as follows. First, for each of the eight directions, we count the number of consecutive occupied points ranging from the neighboring point to immediately before the nearest empty point, as indicated in Figure 1B. Next, one of the eight directions is randomly selected with a weight determined by its count of consecutive occupied points. The consecutive occupied points in the selected direction are then shifted outward by one point so that an empty neighboring point appears, as shown in Figure 1C. Note that simulation results depend on the order of the division operations on the two-dimensional square lattice. The BEP model first marks cells to be divided and then applies the division operation to the marked cells along an outward spiral starting from the center. In each round of the spiral, the direction is randomly flipped in order to maintain spatial symmetry. An example of such spirals is shown in Figure 1D.
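The placement rule above can be sketched in Python as follows. This is a minimal illustration, not the published BEP implementation: the function names, the grid representation, and the exact direction weighting (here 1/(run + 1), so that shorter occupied runs are preferred) are our own assumptions.

```python
import random

# Moore neighborhood: the eight lattice points around a central point.
MOORE = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def count_occupied_run(grid, x, y, dx, dy):
    """Count consecutive occupied points from the neighbor of (x, y) in
    direction (dx, dy), up to (but not including) the nearest empty point."""
    n, run = len(grid), 0
    cx, cy = x + dx, y + dy
    while 0 <= cx < n and 0 <= cy < n and grid[cx][cy] is not None:
        run += 1
        cx, cy = cx + dx, cy + dy
    return run

def place_daughter(grid, x, y, cell):
    """Place a daughter cell next to (x, y); if no empty Moore neighbor
    exists, shift a run of occupied cells outward by one point (assumes
    the shift stays within the lattice)."""
    n = len(grid)
    empty = [(x + dx, y + dy) for dx, dy in MOORE
             if 0 <= x + dx < n and 0 <= y + dy < n and grid[x + dx][y + dy] is None]
    if empty:
        ex, ey = random.choice(empty)
        grid[ex][ey] = cell
        return
    # No empty neighbor: weight directions by their occupied-run lengths
    # (assumed weight 1 / (run + 1), preferring shorter runs).
    runs = [count_occupied_run(grid, x, y, dx, dy) for dx, dy in MOORE]
    weights = [1.0 / (r + 1) for r in runs]
    dx, dy = random.choices(MOORE, weights=weights, k=1)[0]
    # Shift the consecutive occupied cells outward by one point, then
    # place the daughter in the vacated neighboring point.
    r = count_occupied_run(grid, x, y, dx, dy)
    for i in range(r, 0, -1):
        grid[x + (i + 1) * dx][y + (i + 1) * dy] = grid[x + i * dx][y + i * dy]
    grid[x + dx][y + dy] = cell
```

The outward-spiral update order and the random direction flipping described above are omitted here; only the per-cell placement step is shown.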
A cell without mutations divides according to this rule; after a cell acquires its first driver mutation, which accelerates cell division, the proportion of the clone originating from that cell increases in the whole cell population. By repeating these steps, each cell gradually accumulates driver mutations and accompanying passenger mutations, which do not affect the cell division rate, finally forming a tumor with many mutations. Depending on the parameter values during the course of cancer evolution, each cancer cell can accumulate different combinations of mutations to generate different types of ITH. Figure 2 shows an example of snapshots of two-dimensional tumor growth simulated by the BEP model with an appropriate parameter setting. In this example, driver mutations gradually accumulated in the cells, and a clone with four mutations was selected through Darwinian selection and finally became dominant in the tumor.
The BEP model is a very simple model and has many limitations. Although the BEP model assumes that driver mutations increase the replication probability, it is also conceivable that driver mutations instead decrease the death probability. The BEP model also assumes that each driver mutation has the same effect on the replication probability; however, actual tumors contain different driver mutations of different strengths. Although actual tumors grow in three-dimensional space, the BEP model assumes tumor growth on a two-dimensional square lattice; extension to a three-dimensional lattice should be considered as a future improvement. For on-lattice models, various other simulators have been developed for studying tumor growth (off-lattice models, which do not confine tumor growth to a lattice, reflect the actual situation more accurately, but are computationally intensive and not commonly used). For example, pioneering works in agent-based modeling were performed by Anderson and colleagues [8, 9]. Enderling and colleagues extended the model to incorporate cell differentiation from cancer stem cells, where differentiated cells have a limited potential for cell division [10, 11]. Sottoriva et al. also incorporated a cancer stem cell hierarchy into a spatial agent-based model.
Each group developed a model different from the others, and thus only limited conditions were considered in each study. To address this issue, Iwasaki and Innan developed a flexible and comprehensive simulation framework named tumopp.
3. Fitting the simulation model to observational data
As described in the “Introduction” section, cancer genome analysis demonstrated intratumor heterogeneity and branching evolution of cancer; in particular, an approach known as multiregion sequencing has been widely employed for analyzing solid tumors. Here, we introduce a concrete example of a multiregion sequencing study and explain the utility of cancer evolution simulation when combined with multiregion sequencing data.
In multiregion sequencing, multiple samples obtained from physically separate regions within the tumor of a single patient are analyzed (Figure 3A), with two categories of somatic single-nucleotide mutations identified: “founder” and “progressor” mutations (Figure 3B). Founder mutations are defined as present in all regions, whereas progressor mutations are defined as present in some regions (note that they are also referred to using different terms in different studies, e.g., public/private or trunk/branch mutations). Founder mutations are thought to accumulate during the early phases of cancer evolution. The common ancestor clone acquires all founder mutations, and then branches into subclones, which accumulate progressor mutations and contribute to forming ITH. Through these multiregion mutational profiles, we can infer an evolutionary history of the cancer by constructing a phylogenetic tree (Figure 3C).
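The founder/progressor classification can be expressed compactly. The sketch below assumes a hypothetical minimal representation in which a multiregion profile maps each mutation to the set of regions where it was detected; the function and mutation names are illustrative only.

```python
def classify_mutations(profile):
    """Classify mutations from a multiregion mutation profile.

    `profile` maps each mutation ID to the set of regions in which it was
    detected (a hypothetical minimal representation of Figure 3B):
    founders appear in all regions, progressors in only some.
    """
    regions = set().union(*profile.values())  # all sequenced regions
    founders = {m for m, present in profile.items() if present == regions}
    progressors = {m for m, present in profile.items() if present != regions}
    return founders, progressors

# Toy profile: mutation -> regions carrying it (illustrative values).
profile = {
    "APC":  {"R1", "R2", "R3"},   # detected in all regions -> founder
    "KRAS": {"R1", "R2", "R3"},   # founder
    "mutX": {"R1"},               # detected in one region -> progressor
    "mutY": {"R2", "R3"},         # progressor
}
founders, progressors = classify_mutations(profile)
# founders == {"APC", "KRAS"}, progressors == {"mutX", "mutY"}
```

In practice the same partition is what a phylogenetic reconstruction places on the trunk versus the branches of the tree in Figure 3C.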
As a pioneering study, Gerlinger et al. performed multiregion sequencing, revealing extensive ITH and clonal branching evolution in renal cancer. They identified not only founder mutations in known driver genes such as VHL, but also progressor mutations in additional driver genes, indicating that Darwinian selection continues to operate during subclonal evolution.
Uchi et al. also investigated ITH in nine cases of surgically resected late-stage colorectal tumors by multiregion sequencing to identify founder and progressor mutations in each case. Figure 4 shows the results obtained from one of the nine cases, which contains 20 samples from the primary lesion and one sample from the metastatic lesion. Note that the progressor mutations showed a mutational pattern that was geographically correlated with the sampling locations. Moreover, they found that mutant allele frequencies, which can be approximately regarded as the proportions of cells carrying the mutations in each region, tended to be lower for progressor mutations than for founder mutations. This observation suggests that the founder mutations existed in all the cancer cells, whereas each progressor mutation existed in only a fraction of the cancer cells in each region. Thus, even within each region, extensive ITH may have existed that was not captured at the resolution of multiregion sequencing. In addition, most mutations in known driver genes such as APC, KRAS, and TP53 were found among the founder mutations.
To fit the simulation model to the observational data, we can employ ABC, which constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters. A common incarnation of Bayes’ theorem relates the conditional probability of a specific parameter value θ given data D to the probability of D given θ by the rule P(θ | D) = P(D | θ)P(θ) / P(D), where P(θ | D) denotes the posterior, P(D | θ) the likelihood, and P(θ) the prior. The prior represents beliefs or knowledge about θ before D is available. To obtain the posterior, the likelihood function is required. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula may be elusive or the likelihood function may be computationally very costly to evaluate. Agent-based models also fall into the latter case. ABC methods bypass evaluation of the likelihood function by using summary statistics and simulations, thereby widening the realm of models for which statistical inference is feasible. ABC has rapidly gained popularity over the last few years for analyzing complex problems arising in the biological sciences, e.g., in population genetics, ecology, epidemiology, and systems biology.
In the basic form of ABC, which is known as rejection sampling, we first sample a parameter value (or a combination of parameter values, if there is more than one parameter) from a prescribed prior distribution. Simulated data are then generated from the sampled parameter value. The similarity between the simulated and observational data is evaluated using summary statistics (typically multiple), which are designed to represent the maximum amount of information in the simplest possible form. If the distance between the summary statistics of the simulated and observational data is below a tolerance parameter, the parameter value is accepted and pooled into an approximate posterior sample. By repeating these steps many times, we can approximate the posterior distribution. A conceptual overview of the ABC rejection sampling algorithm is presented in Figure 5.
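The rejection sampling loop can be sketched as follows. The coin-flip model, the parameter values, and the function names are illustrative assumptions for a self-contained example, not part of any cancer simulation.

```python
import random

def abc_rejection(s_obs, simulate, prior_sample, distance, eps, n_iter=20000):
    """Basic ABC rejection sampling: accept parameter draws whose simulated
    summary statistic falls within `eps` of the observed one."""
    accepted = []
    for _ in range(n_iter):
        theta = prior_sample()          # draw a parameter from the prior
        s_sim = simulate(theta)         # simulate data, reduce to a summary
        if distance(s_sim, s_obs) < eps:
            accepted.append(theta)      # pool into the posterior sample
    return accepted

random.seed(0)
# Toy model (an illustrative assumption): data are 50 coin flips, the
# parameter is the heads probability, and the summary statistic is the
# observed fraction of heads.
obs_fraction = 0.7
post = abc_rejection(
    s_obs=obs_fraction,
    simulate=lambda p: sum(random.random() < p for _ in range(50)) / 50,
    prior_sample=lambda: random.random(),   # uniform prior on [0, 1]
    distance=lambda a, b: abs(a - b),
    eps=0.02,
)
# The mean of `post` should lie near the observed fraction 0.7.
```

Tightening `eps` sharpens the approximation but lowers the acceptance rate, which is exactly the accuracy/cost trade-off discussed later in this section.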
In the colorectal cancer study by Uchi et al., as summary statistics, they adopted the proportions of founder mutations and unique mutations, which are observed in only one sample, in a multiregion mutation profile. They obtained multiregion mutation profiles for nine cases with different sample numbers. As the proportions of founder mutations and unique mutations depend on the number of samples, they set the sample number to five, the minimum sample number among the nine cases, by downsampling the samples in cases containing more than five samples. They then estimated the means of the proportions of founder mutations and unique mutations and used these values as summary statistics of the observational data (Figure 6; note that although ABC should ideally be applied to each of the nine cases separately, they targeted the population mean for simplicity).
For ABC, they generated simulation data while varying three parameters (the mutation rate, the number of driver genes, and the effect strength of driver mutations), which appear to be critical for simulation results (for strategies used to find such parameters, see the next section). In each simulation trial, they simulated multiregion sequencing from a tumor simulated by the BEP model; a multiregion mutation profile was obtained by digging five squares out of the simulated tumor and averaging the mutation status of the cells in each square. From the multiregion mutation profile, the proportions of founder mutations and unique mutations were obtained as summary statistics. They performed 50 simulations for each grid point in a three-dimensional rectangular parameter space; namely, they assumed a uniform prior for each of the three parameters. For each grid point, they calculated the proportion of simulation instances whose statistics fall within one standard deviation of the mean of the values observed in the real multiregion mutation profiles. The distribution of these proportions can be regarded as the posterior and visualized as heat maps (Figure 7).
As a result, when cancer evolution was simulated with the assumption of a high mutation rate, the simulation reproduced mutation profiles similar to those obtained by multiregion sequencing of colorectal cancers (compare Figure 8A and B with Figure 4A and B). That is, irrespective of the presence of founder mutations, progressor mutations contributed to the formation of a heterogeneous mutation profile that was geographically correlated with the sampling locations. Moreover, the simulation also reconstructed local heterogeneity, as illustrated by the finding that progressor mutations existed as mutations with lower allele frequencies in each region. Interestingly, although driver mutations were acquired as founder mutations, the progressor mutations contained few driver mutations, and most were neutral mutations that did not affect the cell division rate. This suggests that, after the appearance of the common ancestor clone with accumulated driver mutations, extensive ITH was generated by neutral evolution. Moreover, the single-cell mutation profiles of the simulated tumor suggest that the tumor comprises a large number of minute clones with numerous accumulated neutral mutations (Figure 8C).
By employing an agent-based model and ABC, Sottoriva et al. also proposed a Big Bang model of human colorectal tumor growth; in their model, tumors grow predominantly as a single expansion producing numerous intermixed subclones that are not subject to stringent selection, which is consistent with the model developed by Uchi et al.
Although the problem of computational cost generally accompanies ABC, new sampling approaches utilizing Markov chain Monte Carlo and its derivatives have been developed to overcome this limitation. Moreover, considering the steady increase in computing power, this problem will potentially become less important. Notably, ABC has many potential pitfalls. For example, setting the tolerance parameter to zero will give accurate results, but typically at a very high computational cost. In practice, therefore, tolerance values greater than zero are used, but this introduces bias. Similarly, sufficient statistics are sometimes not available and other summary statistics are used instead, but this introduces additional bias because of the loss of information. Additionally, prior distributions and choices of parameter ranges are often subject to criticism, although these issues are not unique to ABC and apply to all Bayesian methods. Model complexity (i.e., the number of model parameters) is also an important point. If a model is too simple, it can lack predictive power. In contrast, if the model is too complex, there is a risk of overfitting. Moreover, a complex model faces a problem known as the curse of dimensionality, in which the computational cost increases severely and may, in the worst case, render the computational analysis intractable. When constructing a simulation model, we should follow Occam’s razor: i.e., achieve the lowest model complexity that is sufficient to explain the observational data. To determine the optimal model complexity, we can also employ a model selection scheme based on Bayes factors if the choice of summary statistics is appropriate.
4. Characterizing the dynamics of the simulation model
In the previous section, we explained how to fit a simulation model to observational data. Another direction for studying a simulation model is to characterize its dynamics without observational data. Namely, we can examine parameter dependence by performing a large number of simulations while varying the parameter values. This approach is known as sensitivity analysis and can provide insights into the modeled system as well as identify parameters that are critical for the system dynamics. In sensitivity analysis, as in ABC, we define a summary statistic y. A simulation model is then regarded as a function y = f(x_1, …, x_k), where x_1, …, x_k are the model parameters. The aim of sensitivity analysis can thus also be considered as characterizing the function f.
So far, a number of approaches have been proposed for sensitivity analysis. For example, one-factor-at-a-time (OFAT) sensitivity analysis is one of the simplest and most common approaches, which changes one parameter at a time to determine the effects on a summary statistic. In OFAT sensitivity analysis, we vary one parameter while leaving the other parameters at their baseline (nominal) values, and then return the parameter to its nominal value; this is repeated for each of the other parameters. We then plot the relationship between each parameter and the summary statistic to examine the dependency of the summary statistic on that parameter, or the relationship can be measured by partial derivatives or linear regression. In exchange for its simplicity, this approach does not fully explore the input space, as it does not consider the simultaneous variation of multiple parameters. This means that the OFAT approach cannot detect interactions between parameters.
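A minimal sketch of an OFAT sweep, using a hypothetical toy model with an interaction term to illustrate the stated limitation: around the chosen baseline, varying b alone shows no effect even though the product a*b matters.

```python
def ofat(model, baseline, ranges):
    """One-factor-at-a-time sweep: vary each parameter over its range while
    all other parameters stay at their baseline values."""
    results = {}
    for name, values in ranges.items():
        sweep = []
        for v in values:
            params = dict(baseline)
            params[name] = v              # move only this parameter
            sweep.append((v, model(**params)))
        results[name] = sweep
    return results

# Toy summary statistic with an interaction term (a * b); an OFAT sweep
# around the baseline a = 0 cannot detect b's influence.
model = lambda a, b: 2.0 * a + a * b
baseline = {"a": 0.0, "b": 0.0}
out = ofat(model, baseline, {"a": [0.0, 1.0, 2.0], "b": [0.0, 1.0, 2.0]})
# out["a"] shows a clear effect; out["b"] is flat at 0.0 for every value.
```

This is precisely why global methods, discussed next, are needed when parameter interactions may be present.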
Global sensitivity analysis aims to address this point by sampling a summary statistic over a wide parameter space involving multiple parameters. Sobol’s method is a popular approach for estimating the contributions of different combinations of parameters to the variance of the summary statistic, while assuming that all parameters are independent. The sensitivity of the summary statistic Y to a parameter X_i is measured by the amount of variance in Y caused by that parameter and can be expressed as a conditional expectation, Var(E[Y | X_i]), where “Var” and “E” denote the variance and expected value operators, respectively, and the inner expectation is taken over all input variables except X_i. This expression essentially measures the contribution of X_i alone to the uncertainty (variance) in Y, averaged over variations in the other variables, and, once standardized, is known as the first-order sensitivity index or main effect index. Importantly, it does not measure the uncertainty caused by interactions with other variables. A further measure, known as the total effect index, gives the total variance in Y caused by X_i and its interactions with any of the other input variables. Both quantities are typically standardized by dividing by Var(Y). In Sobol’s method, we typically attempt full exploration of the parameter space based on a Monte Carlo method to grasp parameter interactions and nonlinear responses.
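The first-order index can be illustrated with a brute-force Monte Carlo sketch. This is a didactic implementation under our own assumptions (two independent uniform inputs, nested sampling of the conditional mean); dedicated estimators such as those in the SALib library are used in practice.

```python
import random
import statistics

def first_order_indices(f, n_outer=200, n_inner=200, seed=1):
    """Brute-force estimate of the Sobol first-order indices
    S_i = Var(E[Y | X_i]) / Var(Y) for a model with two independent
    U(0, 1) inputs (didactic sketch, not an efficient estimator)."""
    rng = random.Random(seed)
    cond_means_1, cond_means_2, all_y = [], [], []
    for _ in range(n_outer):
        x1, x2 = rng.random(), rng.random()
        y1 = [f(x1, rng.random()) for _ in range(n_inner)]  # fix X1, vary X2
        y2 = [f(rng.random(), x2) for _ in range(n_inner)]  # fix X2, vary X1
        cond_means_1.append(statistics.mean(y1))  # estimates E[Y | X1 = x1]
        cond_means_2.append(statistics.mean(y2))  # estimates E[Y | X2 = x2]
        all_y.extend(y1)
    var_y = statistics.pvariance(all_y)
    return (statistics.pvariance(cond_means_1) / var_y,
            statistics.pvariance(cond_means_2) / var_y)

# Nearly additive test model: Y = X1 + 0.1 * X2, so analytically
# S1 = 1 / 1.01 ~ 0.99 and S2 = 0.01 / 1.01 ~ 0.01.
s1, s2 = first_order_indices(lambda x1, x2: x1 + 0.1 * x2)
```

For an additive model like this one, the first-order indices sum to one; a shortfall of the sum below one signals variance carried by parameter interactions.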
However, such approaches appear to be insufficient to comprehensively grasp how the parameters judged to be influential control the behaviors of agent-based models. To overcome this point, Niida et al. developed an approach named MASSIVE, which combines massively parallel simulation on a supercomputer with interactive visualization of the results.
Below, we explain an example of sensitivity analysis performed by Niida et al. to understand the precise mechanisms underlying neutral evolution induced by a high mutation rate. First, they built an agent-based model, referred to as the “neutral” model, for simulating neutral evolution in cancer. Although the neutral model is similar to the BEP model, it assumes only neutral mutations and omits spatial information. They also improved the approach used for mutation accumulation in the BEP model. Namely, in the neutral model, they considered only neutral mutations, which do not affect cell division and death. In a unit time, a cell divides into two daughter cells with a constant probability, without dying. In each cell division, each of the two daughter cells acquires neutral mutations. They assumed that neutral mutations acquired by different division events occur at different genomic positions. The simulation started from one cell without mutations and ended when the population size or the elapsed time reached a prescribed limit.
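A minimal sketch of such a neutral model is shown below. The parameter values and the fixed number of mutations per division are illustrative assumptions, not the published settings.

```python
import random

def simulate_neutral(div_prob=0.3, mut_per_div=2, n_max=1000, t_max=100, seed=0):
    """Minimal sketch of the 'neutral' model: cells divide with a constant
    probability, never die, and each daughter acquires fresh neutral
    mutations at unique genomic positions (illustrative parameters)."""
    rng = random.Random(seed)
    next_mut = 0                    # counter issuing unique mutation IDs
    cells = [frozenset()]           # one founder cell without mutations
    for _ in range(t_max):
        if len(cells) >= n_max:     # stop at the population-size limit
            break
        new_cells = []
        for cell in cells:
            if rng.random() < div_prob:
                # Two daughters, each with its own new neutral mutations.
                for _ in range(2):
                    muts = set(range(next_mut, next_mut + mut_per_div))
                    next_mut += mut_per_div
                    new_cells.append(cell | muts)
            else:
                new_cells.append(cell)
        cells = new_cells
    return cells

cells = simulate_neutral()
# Each cell's mutation set records its full lineage history, from which
# summary statistics such as an ITH score can be computed.
```

Because every division stamps unique mutation IDs, the shared mutations of any two cells identify their common ancestry, mirroring how founder and progressor mutations are read off real multiregion data.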
Through sensitivity analysis based on the MASSIVE method, they confirmed that the mutation rate is the most important factor affecting neutral evolution (Figure 9). As a summary statistic for evaluating ITH, they calculated an ITH score quantifying the extent of subclonal diversity in the simulated mutation profiles.
Thus far, several theoretical and computational studies have shown that a stem cell hierarchy can boost neutral evolution in a population of cancer cells [12, 27]; based on this, they extended the neutral model to the “neutral-s” model so that it contains a stem cell hierarchy (Figure 10). The neutral-s model assumes that two types of cell exist: stem and differentiated. Stem cells divide with a constant probability without dying. For each stem cell division, a symmetrical division generating two stem cells occurs with a probability s, whereas an asymmetrical division generating one stem cell and one differentiated cell occurs with a probability 1 − s. A differentiated cell symmetrically divides to generate two differentiated cells with a probability f_d but dies with a probability d_d. The means of accumulating neutral mutations in the two types of cell is the same as that in the original neutral model, which means that the neutral-s model reduces to the original neutral model when s = 1. For convenience, they define δ = f_d − d_d and hereinafter use δ rather than d_d.
MASSIVE analysis of the neutral-s model confirmed that incorporation of the stem cell hierarchy boosts neutral evolution (Figure 11). To obtain the heat map in Figure 11A, the ITH score was measured while the symmetric division probability s and the net growth parameter δ = f_d − d_d of differentiated cells (f_d and d_d being their division and death probabilities) were changed, whereas the other parameters were maintained as constant. In the heat map, a decrease in s leads to an increase in the ITH score when δ ≤ 0 (i.e., f_d ≤ d_d). A smaller value of s means that more differentiated cells are generated per stem cell division, and δ ≤ 0 means that the population of differentiated cells cannot grow in total, which is a valid assumption for typical stem cell hierarchy models. That is, this observation indicates that the stem cell hierarchy can induce neutral ITH even with a relatively low mutation rate setting, with which the original neutral model cannot generate neutral ITH.
The underlying mechanism boosting neutral evolution can be explained as follows. Only stem cells were considered for an approximation, as differentiated cells do not contribute to tumor growth when δ ≤ 0. While one cell grows to a population of n cells, let d cell divisions synchronously occur across generations during the clonal expansion. Then, n = (1 + s)^d holds, because the mean number of stem cells generated per cell division is estimated as 1 + s. Solving the equation for d gives d = log n / log(1 + s); that is, it can be estimated that, during clonal expansion, each of the n cells experiences log n / log(1 + s) cell divisions and accumulates m · log n / log(1 + s) mutations on average, where m denotes the number of mutations acquired per division. They confirmed that the expected mutation count based on this formula fit well with the values observed in their simulation (data not shown). These arguments mean that a tumor with a stem cell hierarchy (s < 1) accumulates more mutations until reaching a fixed population size than does a tumor without a stem cell hierarchy (s = 1). That is, a stem cell hierarchy increases the apparent mutation rate by log 2 / log(1 + s)-fold, which induces neutral evolution even with relatively low mutation rate settings.
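This estimate can be checked numerically with a small simulation of synchronous stem cell divisions. The sketch below follows the simplifying assumptions stated in the text (only stem cells are tracked, divisions are synchronous); the function name and parameter values are our own.

```python
import math
import random

def divisions_until_n(s, n_target, seed=0):
    """Grow a stem cell population from one cell, tracking the number of
    divisions each lineage has experienced. Symmetric divisions (two stem
    daughters) occur with probability s, asymmetric ones (one stem
    daughter) with probability 1 - s; differentiated cells are ignored,
    assuming they do not contribute to growth."""
    rng = random.Random(seed)
    lineages = [0]                      # divisions experienced per stem cell
    while len(lineages) < n_target:
        new = []
        for d in lineages:              # one synchronous generation
            if rng.random() < s:
                new.extend([d + 1, d + 1])   # symmetric: two stem cells
            else:
                new.append(d + 1)            # asymmetric: one stem cell
        lineages = new
    return lineages

s, n = 0.5, 10000
lineages = divisions_until_n(s, n)
observed = sum(lineages) / len(lineages)
predicted = math.log(len(lineages)) / math.log(1 + s)
# The observed mean divisions per lineage should approximate
# log n / log(1 + s), as derived in the text.
```

With s = 0.5 the hierarchy multiplies the divisions per lineage, and hence the apparent mutation rate, by roughly log 2 / log 1.5, about 1.7-fold.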
Recent genomic analyses demonstrated that multiple evolutionary modes exist in cancer systems. For example, as described above, ITH in renal cancer is generated by Darwinian selection, which is in contrast to neutral evolution in colorectal cancer. Moreover, by multiregion sequencing of early-stage colorectal tumors, Saito et al. suggested that, unlike late-stage tumors, early-stage colorectal tumors evolve under Darwinian selection.
Sensitivity analysis also provides insight into metastatic tumor progression, which is poorly understood despite its clinical importance. Evaluation of genomic divergence between paired metastatic and primary tumors (M-P divergence) from multiregion sequencing is a good starting point for addressing this problem. Sun and Nikolakopoulos extended simulation modeling of cancer evolution to this setting of metastatic progression.
In this chapter, we introduced agent-based modeling of cancer evolution along with methodologies for data fitting and sensitivity analysis. Although there is a long history of theoretical science in the field of cancer research, this approach had been overshadowed by experimental science until recently. However, with the recent explosive increase in cancer genome data, there is now an increasing need to integrate experimental and theoretical science. As an example, this chapter introduced methods for modeling and analyzing the evolutionary processes generating ITH, which is experimentally observed by multiregion sequencing. We also presented exemplifying applications: e.g., agent-based simulation modeling and analysis successfully demonstrated that ITH in colorectal cancer is generated by neutral evolution, which is caused by a high mutation rate and a stem cell hierarchy. For cancer genome analyses, new experimental technologies are actively being developed. For example, single-cell sequencing technologies can profile ITH at the ultimate resolution, while liquid biopsy technologies, such as the sequencing of circulating tumor DNA, enable us to non-invasively track cancer evolution during treatment. These technologies will unveil further aspects of cancer evolution when combined with the approach introduced in this chapter. This chapter also exemplified how simulation modeling helps to solve scientific problems raised by new experimental technologies. We hope that this chapter will provide readers with some hints for solving their own problems using simulation modeling.
This work was supported by the JSPS KAKENHI (19K12214) and AMED (JP21cm0106504).