## 1. Introduction

### 1.1. Definition of big data and each clinical application overview

Trifiletti et al. [1] describe big data as massive data sets, on the order of 10^{12}–10^{18} bytes, too large for direct human analysis, likening their scale to the number of grains of sand on Earth [1].

Murdoch listed four reasons why big data are an inevitable application in the healthcare field [10]:

- Expanding the capacity to create new knowledge
- Helping with knowledge dissemination
- Translating personalized medicine into clinical practice with EHR data
- Allowing a transformation of health care by transferring information to patients [10]

This trend has been called a "big bang" of adaptation and research on big data and machine learning in medicine, and machine learning in particular is widely used [4–6]. Radiotherapy treats cancer with radiation according to a treatment plan prepared for each patient and radiotherapy machine. The dose, volume, device setting information, complications, tumor control probability, and so on are considered for each fraction of a single patient's radiotherapy course. Big data accumulated over a long time from numerous patient cases are therefore well suited to producing optimal treatments while minimizing radiation toxicity and complications. In this chapter, we describe various clinical cases and key machine learning algorithms in radiation oncology.

First, what constitutes big data for a single patient in a hospital? The data types and sizes for each patient are summarized in **Table 1**. In radiation oncology, imaging and treatment planning information are the major treatable data [15].

Second, we explain radiation treatment planning and decision support systems in radiation oncology. Treatment planning parameters for patient cure in radiotherapy are set up in the radiation treatment planning (RTP) system. The clinical target volume (CTV) and planning target volume (PTV) must receive the maximum radiation dose, while critical organs must receive the minimum. Planning is based on the correlation between dose and volume, known as the dose-volume histogram (DVH). The parameters considered in this process include the prescription dose (PD), dose distribution, dose fractionation, dose constraints for normal tissue, target volume, and treatment machine setting values [2, 16].
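As a concrete illustration, a cumulative DVH can be computed from per-voxel doses in a few lines of Python. The voxel doses below are hypothetical, and equal voxel volumes are assumed:

```python
import numpy as np

def cumulative_dvh(doses_gy, bin_width=0.5):
    """Cumulative DVH: fraction of structure volume receiving >= each dose level.

    doses_gy: 1-D array of per-voxel doses for one structure
    (equal voxel volumes assumed). Returns (dose_levels, volume_fraction).
    """
    doses_gy = np.asarray(doses_gy, dtype=float)
    levels = np.arange(0.0, doses_gy.max() + bin_width, bin_width)
    # For each dose level, the fraction of voxels at or above it
    frac = np.array([(doses_gy >= d).mean() for d in levels])
    return levels, frac

# Hypothetical voxel doses (Gy) for a small organ-at-risk
doses = np.array([10.0, 20.0, 20.0, 40.0])
levels, frac = cumulative_dvh(doses, bin_width=10.0)
# V20 (fraction of volume receiving >= 20 Gy), read off the curve
v20 = frac[levels == 20.0][0]
```

Dose constraints such as V20 for the lung can then be checked directly against such a curve.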

Third, once treatment planning is complete, the DVH is acquired. The dose-volume distribution provides the basic information for judging whether a plan is usable. However, this limited information says nothing about hot spots in the target volume, conformity, homogeneity, and so on, and the tumor control probability (TCP) and normal tissue complication probability (NTCP) must be analyzed in parallel. Based on knowledge-based judgment, rival plans may then be generated [32]. Thus, a decision support system is needed to select the best treatment plan for personalized patient care. Such decision support systems (BIOPLAN, CERR, DRESS, SlicerRT, etc.) provide different functions for analyzing treatment efficiency and have been researched and developed as software programs from the early 2000s to the present [3, 26–28].

These decision support systems now need additional functions that use machine learning, historical treatment results, and the previously mentioned big data to predict patient toxicity and complications after radiation treatment.

## 2. Clinical application using big data in radiation oncology

### 2.1. Prostate cancer

Çınar et al. [25] describe prostate cancer as follows:

- Prostate cancer occurs most frequently in men over 50.
- Prostate cancer is currently the most common cancer in men apart from lung cancer [25].

This clinical application is therefore a meaningful target for machine learning on big data. Coates et al. [4] studied integrated big data research for prostate cancer in radiation oncology. The parameters are dose-volume metrics (EUD), clinical parameters [gastrointestinal (GI) toxicities such as rectal bleeding and genitourinary (GU) toxicities such as erectile dysfunction (ED)], spatial parameters (zDVH), biological variables (genetic variables), and so on, and risk quantification modeling of TCP and NTCP was performed. Various modeling methods exist; neural networks and kernel-based methods are widely used. **Figure 1** shows the toxicity prediction results using principal component analysis (PCA) [4].

De Bari et al. [5] carried out a pilot study on predicting pelvic nodal status in prostate cancer using machine learning. A total of 1555 cN0 and 50 cN+ prostate cancer patients were enrolled, and decision tree and other machine learning algorithms were compared against the performance of the Roach formula and the Partin tables. The study showed accuracy, specificity, and sensitivity ranging between 48–86%, 35–91%, and 17–79%, respectively (**Figure 2**).

In addition, several analysis articles with index results have been reported for prostate cancer, which could serve as examples for adding the above machine learning algorithms in a next step [30, 31].

### 2.2. Lung cancer

Das et al. [6] describe radiation-induced pneumonitis as a serious problem in the thorax, including the lung, as follows:

- It is an important problem caused by incident radiation to the adjacent or surrounding normal lung.
- High-grade pneumonitis occurs in 15–36% of cases in retrospective studies.

Das et al. [6] conducted prediction modeling based on 234 lung cancer patients and the Lyman normal tissue complication probability (LNTCP) using decision tree analysis. **Table 2** shows injury predictions under various settings for a male patient.

RT, radiotherapy; LNTCP, Lyman normal tissue complication probability.
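The Lyman model underlying LNTCP can be sketched in a few lines: the complication probability is the standard normal CDF of t = (EUD − TD50)/(m·TD50). The probit form is standard, but the parameter values below are illustrative, not the fitted values of Das et al. [6]:

```python
import math

def lyman_ntcp(eud_gy, td50_gy, m):
    """Lyman NTCP: standard normal CDF of t = (EUD - TD50) / (m * TD50).

    td50_gy: dose giving a 50% complication probability; m: slope parameter.
    """
    t = (eud_gy - td50_gy) / (m * td50_gy)
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative (hypothetical) lung parameters: at EUD = TD50 the model gives 0.5
ntcp = lyman_ntcp(eud_gy=30.0, td50_gy=30.0, m=0.35)
```

The sigmoid shape of this dose-response curve is what the decision tree thresholds in **Table 2** discretize.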

### 2.3. Head and neck cancer

Head and neck cancer patients undergo anatomical changes over the few weeks of radiotherapy. Kilovoltage cone-beam computed tomography (kV-CBCT) and megavoltage computed tomography (MVCT) combined with a linear accelerator (LINAC) make it possible to monitor a patient's daily anatomical changes across treatment fractions in modern radiotherapy [7]. Adaptive radiotherapy (ART) can correct for this anatomical variation by adjusting the dose distribution, thereby reducing unexpected toxicity. However, ART requires time and labor for the daily setup needed to correct the variation; when replanning must be done daily or weekly for numerous patients, the process is laborious and time-consuming.

Considering this process and patient characteristics, Guidi et al. [7] studied the prediction of replanning benefit using unsupervised machine learning on retrospective data. **Figure 3** shows the algorithm architecture of this study: from the DVH input, clustering classifies the data into groups, support vector machine (SVM) training analyzes the parotid gland, and a clinical acceptance level is applied in the test and output process [7]. The results suggest that replanning is needed for 77% of patients, because significant morpho-dosimetric changes affect them from the fourth week of treatment.

## 3. Machine learning methodology

When a machine learning method is selected in radiation oncology, the input and output variables are considered so that the expected analysis results can be predicted and validated for accuracy. Kang et al. [14] describe the principles of modeling as follows (**Figure 4**).

### 3.1. Machine learning introduction

Ethem Alpaydin [8] defines machine learning as programming computers to optimize a performance criterion using data, and Mitchell describes a computer program as learning from experience (E) with respect to some task (T) and performance measure (P) [9].

Machine learning algorithms can be divided into unsupervised and supervised learning [8, 11]. The training and test processes for unsupervised and supervised learning differ slightly, as shown in **Figure 5**; the distinguishing feature between **Figure 5(a)** and **(b)** is the feedback loop between training and test.

### 3.2. Supervised learning

Supervised learning is a machine learning method that learns to produce a result from training data. For example, suppose we know the doughnut and bagel classes beforehand and train a classifier on labeled examples of each. We can then classify a new item as belonging to the doughnut group or the bagel group. This is an example of supervised learning.

Generally, the training data consist of input characteristics in vector form paired with the desired results. Predicting a continuous result from such data is regression, while classification decides which of several groups an input vector belongs to. When a supervised learner is executed, the training data must be measured by a proper method to achieve the final goal, and the accuracy and validation of the classification must be quantified to measure its performance.
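The accuracy, sensitivity, and specificity used throughout this chapter can be computed from a binary confusion matrix; a minimal sketch with hypothetical toxicity labels:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) for binary labels coded 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Hypothetical labels (1 = complication observed) vs. model predictions
y_true = [1, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1, 1]
acc, sens, spec = classification_metrics(y_true, y_pred)
```

These are the same indices reported for the nodal-status study in Section 2.1.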

#### 3.2.1. Decision tree

A decision tree consists of nodes and branches. In a more complicated hierarchy, further leaf nodes and branches follow from each decision. A diagram is thus formed in which a condition is tested at each node and the decision "yes" or "no" determines the direction taken through the tree. This makes it easy to trace how a hypothesis produces its results. **Figure 6** shows a decision tree and the rules derived from its conditions on patient characteristics such as chemotherapy, cell type, treatment, and sex for radiotherapy (RT).

A hyperplane h(x) for points x is defined in Eq. (1) [12]:

h(x) = w^{T}x + b  (1)

where w is the weight vector and b is the offset. The generic form of a split point for a numeric attribute X_{i} is given in Eq. (2):

X_{i} ≤ v  (2)

where v = −b is a value in the domain of X_{i}. The decision point X_{i} ≤ v thus divides the input data space R into two regions R_{Y} and R_{N}. Each split of R into R_{Y} and R_{N} also induces a binary partition of the corresponding input data points D. That is, a split point of the form X_{i} ≤ v induces the data partition in Eqs. (3) and (4):

D_{Y} = {x | x ∈ D, x_{i} ≤ v}  (3)

D_{N} = {x | x ∈ D, x_{i} > v}  (4)

where D_{Y} is the subset of data points that lie in region R_{Y} and D_{N} is the subset of input points that lie in R_{N} [12].
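As a minimal illustration of such axis-parallel splits X_{i} ≤ v, a decision tree can be trained with scikit-learn on hypothetical dosimetric features (not the data of any study cited here):

```python
# A decision-tree sketch on hypothetical treatment features.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [mean lung dose (Gy), chemotherapy (0/1)]
X = [[10, 0], [12, 0], [25, 1], [30, 1], [8, 0], [28, 1]]
y = [0, 0, 1, 1, 0, 1]  # 1 = radiation pneumonitis observed

# max_depth limits the hierarchy of nodes and branches
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)
pred = clf.predict([[27, 1]])[0]  # classify a new patient
```

The learned split thresholds can be inspected with `sklearn.tree.export_text(clf)`, which prints rules of exactly the X_{i} ≤ v form above.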

#### 3.2.2. Support vector machine

A support vector machine (SVM) is a machine learning method for pattern recognition and information analysis, generally used for classification and regression. The SVM decides which category a given input data point belongs to. To understand the SVM, the terms data group and hyperplane must first be defined.

A hyperplane in d dimensions is the set of all points x ∈ R^{d} that satisfy h(x) = 0, where h(x) is the hyperplane function, defined in Eq. (5) [12]:

h(x) = w^{T}x + b  (5)

Here, w is the d-dimensional weight vector and b is a scalar called the bias. For points that lie on the hyperplane, Eq. (6) holds:

w^{T}x = −b  (6)

The hyperplane is thus the set of all points with w^{T}x = −b. If the input data are linearly separable, a dividing hyperplane h(x) = 0 can be found such that h(x_{i}) < 0 for all points labeled y_{i} = −1 and h(x_{i}) > 0 for all points labeled y_{i} = +1.

The weight vector w is normal to the hyperplane, while the bias b fixes the offset of the hyperplane in the d-dimensional space. Because both w and −w are normal to the hyperplane, this ambiguity is removed by requiring h(x_{i}) > 0 where y_{i} = +1 and h(x_{i}) < 0 where y_{i} = −1.

Thus, let **x**_{p} be the orthogonal projection of **x** onto the hyperplane, and let **r** = **x** − **x**_{p}; then

**x** = **x**_{p} + r (w / ‖w‖)

where the scalar r is the directed distance of **x** from the hyperplane: r is positive when **r** points in the same direction as w and negative when it points in the opposite direction (**Figure 7**) [12].

When the classes are not separable by a linear SVM, a nonlinear SVM is used; the shape is shown in **Figure 8**, and common kernels include the polynomial and Gaussian kernels. Libraries implementing the support vector machine in various programming languages are listed in **Table 3**.
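Using one such library (scikit-learn for Python), a short sketch contrasts a linear SVM with a Gaussian-kernel SVM on a toy XOR-like problem, which no single hyperplane can separate:

```python
from sklearn.svm import SVC

# XOR-like toy data: not linearly separable
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# A linear SVM can classify at most 3 of the 4 points correctly
lin = SVC(kernel="linear", C=10.0).fit(X, y)
acc_lin = lin.score(X, y)

# The Gaussian (RBF) kernel maps the data to a space where they separate
rbf = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
acc_rbf = rbf.score(X, y)
```

The kernel choice (`gamma`, `C`) is illustrative; in practice these hyperparameters are tuned by cross-validation.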

#### 3.2.3. Neural network

A neural network example in radiation oncology is shown in **Figure 9**. A three-layer neural network (input layer, hidden layer, output layer) models the approximated function as [11]

ŷ = f(x) = w^{(2)T}z + b^{(2)}

where the elements of z are the outputs of the hidden-layer neurons:

z = σ(W^{(1)}x + b^{(1)})

(where x is the input vector, W^{(j)} and b^{(j)} are the interconnection weights and bias of layer j, and σ is the activation function)
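The three-layer model above can be sketched as a forward pass in NumPy. The weights are random and purely illustrative, and a sigmoid activation is assumed:

```python
import numpy as np

def three_layer_forward(x, W1, b1, w2, b2):
    """Forward pass of a three-layer network (input, one hidden layer, output):
    y = w2^T * sigmoid(W1 x + b1) + b2."""
    z = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden-layer neuron outputs
    return w2 @ z + b2                         # linear output neuron

rng = np.random.default_rng(0)
x = rng.normal(size=4)                          # input vector (e.g., dosimetric features)
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)  # layer-1 weights and bias
w2, b2 = rng.normal(size=5), 0.0                      # output weights and bias
y_hat = three_layer_forward(x, W1, b1, w2, b2)
```

Training would fit W1, b1, w2, b2 to data, e.g. by backpropagation; only the model structure is shown here.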

### 3.3. Unsupervised learning

Unsupervised learning, unlike supervised learning, does not know the specific group information; the learning algorithm infers the structure itself, as in the doughnut and bagel example without labels. That is, there is no target value in unsupervised learning. It is related to density estimation in statistics and is useful for analyzing and explaining data characteristics. A typical example is clustering; another is independent component analysis.

#### 3.3.1. Principal component analysis (PCA)

Zaki and Meira [12] define PCA as follows:

- PCA finds an r-dimensional basis that captures the data variance.
- The direction of largest projected variance is called the first principal component.
- The orthogonal direction with the next largest variance is the second principal component, and so forth.
- Maximizing the captured variance also minimizes the mean squared error [12].

Principal component analysis (PCA) is applied to the normalized data matrix X to identify a set of principal components (PCs) [11]:

X = UΣV^{T}

where UΣV^{T} is the singular value decomposition of X; the columns of V give the principal component directions.
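A minimal NumPy sketch of PCA via the SVD of the centered data matrix, on hypothetical 2-D data dominated by the first coordinate:

```python
import numpy as np

# Hypothetical 2-D data: variance concentrated along the first coordinate
X = np.array([[2.0, 0.1], [4.0, -0.2], [6.0, 0.3], [8.0, -0.1]])
Xc = X - X.mean(axis=0)                 # center the data

# SVD of the centered matrix: Xc = U diag(S) Vt
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Vt[0]                             # first principal component direction
explained = (S**2) / (S**2).sum()       # variance fraction per component
scores = Xc @ Vt.T                      # data projected onto the PCs
```

Here nearly all variance lies along the first axis, so the first PC points (up to sign) along it; toxicity studies such as [4] use the leading PCs as low-dimensional features.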

#### 3.3.2. Clustering

Clustering is an unsupervised learning method that finds clusters in data without labels. Classification requires both the data and their labels, so unlabeled data need a different approach. There are several ways to define a cluster; one simple way is to require that data within the same cluster be close to each other, grouping the nearest data points together. k-Means assumes that data in the same cluster lie close to a single cluster center, and a cost can be defined as the distance between the center and each data point. k-Means is thus an algorithm that minimizes this cost over the clusters.

Given a clustering C = {C_{1}, C_{2}, …, C_{k}}, a scoring function evaluates its quality. The sum of squared errors (SSE) scoring function is defined as [12]

SSE(C) = Σ_{i=1}^{k} Σ_{x_{j} ∈ C_{i}} ‖x_{j} − μ_{i}‖^{2}

where μ_{i} is the mean of cluster C_{i}. The goal is to find the clustering that minimizes the SSE score:

C* = arg min_{C} SSE(C)

k-Means employs a greedy iterative approach to find a clustering that minimizes the SSE objective [12].
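A plain k-means sketch that alternates the assignment and centroid-update steps to reduce the SSE, run on hypothetical well-separated data:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # initial centers
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center becomes the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    # Final assignment and SSE for the converged centers
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    sse = ((X - centers[labels]) ** 2).sum()
    return labels, centers, sse

# Two well-separated hypothetical clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels, centers, sse = kmeans(X, k=2)
```

Each iteration can only decrease the SSE, which is why the greedy loop converges; the result may still be a local minimum, so several random restarts are common in practice.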

The advantages and disadvantages of various machine learning algorithms in radiation oncology are summarized in **Table 4**.

## 4. Conclusion

We have summarized various clinical applications of machine learning algorithms in radiation oncology, covering head and neck, lung, and prostate cancer [13, 18, 19], and introduced the relevant machine learning algorithms and definitions. For precision medicine in radiation oncology, radiation toxicity and complication factors are unavoidable parameters for patients after radiotherapy. The dose-volume distribution provides the basic information, but this limited information does not give the tumor control probability (TCP), normal tissue complication probability (NTCP), or grade level. Thus, a decision support system is needed to select the best treatment plan for personalized patient care, and such systems now need additional functions that use machine learning, historical treatment results, and the previously mentioned big data to predict patient toxicity and complications after radiation treatment [29].

Another current big data trend is research on medical imaging such as DICOM-RT in radiotherapy. Images carry a great deal of information about the patient's current status and future course, such as predictions of quality of life. Lung cancer and breast cancer are therefore good applications for big data research using simple chest X-rays or other low-cost imaging methods.

Thus, we outline a predictive solution for radiation toxicity based on big data as a treatment planning decision support system in **Figure 10**. In this block diagram, the input part provides treatment data (i.e., rival plans with DVHs) from a radiation treatment planning system. The dosimetric and biological index analysis is then performed by the program. A normal tissue complication probability (NTCP) model can be adopted here; for lung cancer, it considers central lung distance (CLD) and maximal heart distance, indicators measured in two-dimensional radiation therapy, alongside three-dimensional conformal radiation therapy information. The dose-volume relationship and tolerance doses for organs at risk are analyzed by machine learning algorithms in the decision support system. At this point, the treatment "big data" of numerous patients can be used to evaluate the machine learning results and to predict toxicity and normal tissue complications against a knowledge-based approach. This provides an evidence-based decision for finalizing the treatment plan for customized patient cure [20–24].

Therefore, current decision support systems can be extended to predict complications and toxicity after radiotherapy by adding not only dosimetric and biological index functions but also clinical big data analysis with various machine learning algorithms. This is the fusion solution for customized patient cure in the big data era of radiation oncology.