Open access peer-reviewed chapter

MultiRes Attention Deep Learning Approach for Abdominal Fat Compartment Segmentation and Quantification

Written By

Bhanu K.N. Prakash, Arvind Channarayapatna Srinivasa, Ling Yun Yeow, Wen Xiang Chen, Audrey Jing Ping Yeo, Wee Shiong Lim and Cher Heng Tan

Submitted: 14 February 2023 Reviewed: 06 April 2023 Published: 11 May 2023

DOI: 10.5772/intechopen.111555

From the Edited Volume

Deep Learning and Reinforcement Learning

Edited by Jucheng Yang, Yarui Chen, Tingting Zhao, Yuan Wang and Xuran Pan


Abstract

The global increase in obesity has led to an alarming rise in co-morbidities and a deteriorated quality of life. Obesity phenotyping benefits profiling and management of the condition but warrants accurate quantification of fat compartments. Manual quantification of fat from MR scans is time-consuming and laborious. Hence, many studies rely on semi-automatic or automatic methods for quantification of abdominal fat compartments. We propose a MultiRes-Attention U-Net with a hybrid loss function for segmentation of different abdominal fat compartments, namely (i) superficial subcutaneous adipose tissue (SSAT), (ii) deep subcutaneous adipose tissue (DSAT), and (iii) visceral adipose tissue (VAT), using abdominal MR scans. The MultiRes block, ResAtt-Path, and attention gates can handle shape, scale, and heterogeneity in the data. The dataset comprised MR scans from 190 community-dwelling older adults (mainly Chinese, 69.5% female) with mean age 67.85 ± 7.90 years and BMI 23.75 ± 3.65 kg/m2. Twenty-six datasets were manually segmented to generate the ground truth. Data augmentations were performed using MR data acquisition variations. Training and validation were performed on 105 datasets, while testing was conducted on 25 datasets. Median Dice scores were 0.97 for SSAT and DSAT and 0.96 for VAT, and the mean Hausdorff distance was <5 mm for all three fat compartments. Further, MultiRes-Attention U-Net was tested on 190 new datasets (unseen during training; upper and lower abdomen scans with different resolutions), which yielded accurate results. MultiRes-Attention U-Net significantly improved the performance over MultiResUNet, showed excellent generalization, and holds promise for body profiling in large cohort studies.

Keywords

  • MultiRes attention
  • deep learning
  • fat compartments
  • abdomen
  • subcutaneous fat compartments
  • visceral fat

1. Introduction

Obesity is a growing global epidemic: more than 1.9 billion adults (18 years and older) are overweight, of whom over 650 million are obese [1]. Anthropometric measurements such as waist-to-hip ratio, body mass index (BMI), and waist circumference do not explicitly distinguish fat mass or the quantity of fat present in the visceral and subcutaneous compartments. The literature highlights that accumulation of fat leads to insulin resistance, oncologic diseases, and cardiovascular diseases [2, 3, 4], affecting the quality of life. Hence, body composition analysis to determine the amount of adipose and muscle tissue is of medical importance for obesity risk analysis. Magnetic resonance imaging (MRI) and computed tomography (CT) can characterize fat and non-fat tissues [5]. Among the imaging modalities, MR is more efficient than CT in tissue characterization for quantification of body fat volume [6, 7]. By quantifying the different fat compartments from imaging scans, we can perform body composition analysis. Manual quantification of fat and muscle volumes from imaging scans, however, is tedious and time-consuming, leading to a loss of clinical man-hours.

Anatomically, the subcutaneous adipose tissue compartments (superficial: SSAT and deep: DSAT) are separated by a thin fascia, whereas the visceral adipose tissue (VAT) is found between the internal and external abdominal boundaries. VAT surrounds the internal organs and is discontinuous, whereas SAT (SSAT + DSAT) is continuous. Fat depots are irregular in shape, lack texture, and vary across the abdominal profile, as demonstrated in Figure 1, making this a challenging medical image segmentation task. Several semi-automated methodologies have been developed to reduce analysis time and bias [8, 9, 10, 11, 12]. However, these methodologies are less reliable and offer lower accuracy, as they depend on expert knowledge for fine-tuning image parameters.

Figure 1.

Illustration of fat depots of SSAT (red), DSAT (green), and VAT (blue), varying in shape and size across the abdominal profile.

Deep learning for image segmentation [13] has found many applications in medical image analysis, one of which is abdominal fat compartment segmentation. Several fat quantification studies use single-contrast Dixon MR scans and 2D/3D U-Net architectures [14, 15] for SAT and VAT segmentation. Enhanced versions of the standard U-Net, such as the Competitive Dense Fully Convolutional Network (CDFNet), nnU-Net, and Dense Convolutional Network (DCNet), which can handle complex image features, have been used for adipose tissue segmentation [16, 17, 18]. The attention gate (AG) model in 2D and 3D U-Net [19] has gained popularity in adipose tissue segmentation, as AGs focus on target structures of varying shapes and sizes by suppressing irrelevant regions and highlighting useful salient features [20, 21]. Ibtehaz et al. proposed a MultiRes block to address multiscale issues and a ResPath to reduce the adverse learning of features through the U-Net skip connections, which might lead to false predictions [22].

1.1 Study proposition

In our previous work on adipose fat depot segmentation, we proposed a patch-based 3D-ResUNet Attention network [23]. The patch-based framework failed to (i) handle different body compositions, such as lean and moderately obese, due to fixed patch sizes, and (ii) generalize to unseen abdominal regions due to catastrophic forgetting of the network, anatomical differences, and class imbalance. Figure 2 illustrates a few failed cases from our previous work. Hence, to overcome these drawbacks, we focused on enhancing MultiResUNet [22] by proposing a MultiRes-Attention U-Net architecture, with

  1. a hybrid loss function to handle class imbalance, and

  2. attention gates for focused learning and improved prediction accuracy.

Figure 2.

Illustration of failed cases of our previous work on patch-based 3D-ResUNet attention vs. the proposed architecture.

In this study, we also compare the performance of the proposed architecture against standard U-Net and MultiResUNet.


2. Materials and methods

2.1 MR data acquisition

Data sets of 190 elderly Asians (aged >50 years, residing within the community) who participated in a study characterizing early sarcopenia to assess functional decline were used in our study [12]. The MR abdominal scans were acquired using a 3D modified breath-hold T1-weighted Dixon sequence, with subjects advised to hold their breath for 20 s during the scans. The scans were performed on a 3T Siemens Magnetom Trio MRI scanner with TR/TE/flip angle/bandwidth of 6.62 ms, 1.225 ms, 10°, and 849 Hz/pixel, respectively. The study group was mainly of Chinese (91.6%) ethnicity, with mean age 67.85 ± 7.90 years and BMI 23.75 ± 3.65 kg/m2, and was predominantly female (69.5%). As the study subjects were elderly, many had common comorbidities such as hypertension, diabetes, and hyperlipidemia. The national healthcare review board reviewed the cohort study, and written consent was obtained from all subjects.

The data set can be considered heterogeneous, as it included (i) subjects of different ages, (ii) scans covering different anatomical regions (thoracic, lumbar, and sacral), (iii) variations in fat accumulation in the different compartments depending on body composition, and (iv) acquisition variations such as image dimensions, slice thickness, and breathing/motion artifacts.

Ground truths were generated manually by radiology experts for 26 of the 190 scans, covering the L1-L5 regions. The data with ground truths were subjected to MR-acquisition-based data augmentation to scale the number from 26 to 130 training data sets.

2.2 Fat segmentation

A 3-stage segmentation framework was envisaged to quantify abdominal fat depots: (i) a preprocessing stage, which included (a) arm region removal, (b) data augmentation to increase the number of data sets, and (c) conversion of 3D MR images into 2D slices; (ii) a segmentation stage, in which the MultiRes-Attention U-Net architecture segments the abdominal region into SSAT/DSAT/VAT (three classes); and (iii) a postprocessing stage, comprising 2D-to-3D image reconstruction and fat depot quantification.

2.3 Preprocessing

All training/testing data were subjected to a quality check to assess motion artifacts originating from breathing, as well as fat-water swaps. An auto-check was developed to ensure that training dataset slices match the marked ground-truth slices. Arm region artifacts were removed automatically using the projection method [21]. Four different data augmentations were performed once before training, (i) random noise, (ii) random ghosting, (iii) random bias field, and (iv) blur augmentation [23], to increase the total number of datasets. Finally, the 3D MR scans were converted to 2D slices for training/testing the proposed deep learning architecture.
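The four augmentations correspond closely to transforms available in the open-source TorchIO library. The following is a minimal sketch of how such MR-specific augmentation could be composed, assuming TorchIO and NIfTI inputs; the library choice, file names, and parameter ranges are illustrative assumptions, not the exact implementation used in this study.

```python
import torchio as tio

# MR-acquisition-based augmentations: noise, ghosting, bias field, and blur.
augment = tio.Compose([
    tio.RandomNoise(std=(0, 0.05)),         # (i) random Gaussian noise
    tio.RandomGhosting(num_ghosts=(2, 5)),  # (ii) motion-like ghosting artifacts
    tio.RandomBiasField(coefficients=0.5),  # (iii) low-frequency intensity bias
    tio.RandomBlur(std=(0, 1.5)),           # (iv) blur augmentation
])

subject = tio.Subject(mri=tio.ScalarImage('fat_only_dixon.nii.gz'))  # hypothetical file
augmented = augment(subject)                # one augmented copy of the 3D scan
augmented.mri.save('fat_only_dixon_aug.nii.gz')
```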

2.4 MultiRes-Attention U-Net

In a standard deep convolutional network, the input passes through multiple convolutions to obtain salient spatial features, which can lead to the vanishing gradient problem. Architectures like ResNet [25] mitigate this with residual shortcut connections, while DenseNet [26] introduces "dense connections", in which each layer is connected to every other layer instead of only to the previous layer as in a standard architecture; however, the resulting feature-map reuse demands high memory, and neither design directly handles the multi-scale issue. To handle the multi-scale nature of fat depots, which vary in shape and size, and to improve semantic segmentation in a memory-efficient manner, we propose the MultiRes-Attention U-Net, a modified version of MultiResUNet with attention, which contains (i) a MultiRes block, (ii) a ResAtt-Path, and (iii) an attention gate module.

2.5 MultiRes block

The two sequential convolutional layers at each level of U-Net [24] are substituted with the proposed MultiRes block (similar to the dense block in DenseNet [26]) together with a residual path (as in ResNet [25]), as shown in Figure 3. The MultiRes block contains Inception-like modules with parallel convolution filters of 3×3, 5×5, and 7×7 to capture spatial features at different scales. However, such parallel filters are not memory efficient. To reduce memory, we factorize the larger filters into a sequence of 3×3 filters, with a gradual increase in the number of filters at each layer, as shown in Figure 3.

Figure 3.

Proposed MultiRes-Attention U-Net architecture with MultiRes block, ResAtt-Path, and attention gate block at the decoder to aggregate attention features.
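For concreteness, a minimal Keras sketch of the MultiRes block described above is given below. The filter split across the three chained 3×3 stages (roughly W/6, W/3, and W/2 of a scaled width W) follows the convention of the original MultiResUNet paper [22]; the scaling factor alpha and the exact split are assumptions rather than the verbatim implementation.

```python
from tensorflow.keras import layers

def conv_bn(x, filters, kernel_size, activation="relu"):
    """2D convolution followed by batch normalization and optional activation."""
    x = layers.Conv2D(filters, kernel_size, padding="same")(x)
    x = layers.BatchNormalization()(x)
    if activation:
        x = layers.Activation(activation)(x)
    return x

def multires_block(x, filters, alpha=1.67):
    """MultiRes block: three chained 3x3 convs whose concatenation mimics
    3x3, 5x5, and 7x7 receptive fields, plus a 1x1 residual shortcut."""
    w = int(alpha * filters)
    f1, f2, f3 = w // 6, w // 3, w // 2       # filter split, per MultiResUNet
    shortcut = conv_bn(x, f1 + f2 + f3, 1, activation=None)
    c3 = conv_bn(x, f1, 3)                    # effective 3x3 receptive field
    c5 = conv_bn(c3, f2, 3)                   # two chained 3x3 convs ~ 5x5
    c7 = conv_bn(c5, f3, 3)                   # three chained 3x3 convs ~ 7x7
    out = layers.Concatenate()([c3, c5, c7])
    out = layers.Add()([shortcut, out])
    return layers.Activation("relu")(out)
```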

2.6 ResAtt-path

The skip connections of the standard U-Net are modified into the ResAtt-Path by including non-linear 3×3 convolution filters and a residual path with 1×1 filters. The number of 3×3 convolution filters is reduced at each level of the encoding section of U-Net, as shown in Figure 4. The ResAtt-Path overcomes the drawback of the plain U-Net skip connections, namely the direct merging of low-level encoder features with high-level features at the decoder. At each level, the ResAtt-Path connects the U-Net encoder to the attention modules in the decoding section.

Figure 4.

Description of (a) MultiRes block, (b) ResAtt-Path, and (c) attention gate block of the MultiRes-Attention U-Net architecture.
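A minimal sketch of one such path is shown below, continuing the Keras sketch above (it reuses conv_bn and layers). The number of residual units per level, with longer paths at shallower encoder levels, mirrors the original ResPath design and is an assumption here.

```python
def resatt_path(x, filters, length):
    """ResAtt-Path sketch: `length` residual units, each a 3x3 conv with a
    1x1 shortcut, applied to encoder features before the attention gate."""
    for _ in range(length):
        shortcut = conv_bn(x, filters, 1, activation=None)  # 1x1 residual branch
        x = conv_bn(x, filters, 3)                          # 3x3 non-linear branch
        x = layers.Add()([shortcut, x])
        x = layers.Activation("relu")(x)
    return x
```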

2.7 Self-attention

Soft attention gates (AGs), proposed by Oktay et al. [20], help the model focus on regions of interest by suppressing irrelevant location-based feature activations. AGs ensure that only salient spatial information is carried across the skip connection, which improves network performance by reducing false positives. A soft AG, as shown in Figure 4(c) and illustrated in Eq. (1), takes two inputs: (i) I_p, the lower-level block input, and (ii) I_R, the ResAtt-Path input from the proposed skip connection layer. The input I_p is fed into a 1×1 convolution filter and upsampled to match the dimensions of the two inputs, as illustrated in Eq. (2). The dimension-matched features are combined and passed through a ReLU activation followed by a sigmoid activation to yield attention coefficients with values between 0 and 1. Finally, these coefficients are upsampled through trilinear interpolation to generate the soft attention feature map, which is multiplied by the ResAtt-Path's skip connection features; the gated features are concatenated with the upsampled input and passed through a convolution block to produce the final output, as shown in Eq. (3):

x_attention = SoftAttention(I_p, I_R)  (1)
x_upsampled = Upsample(I_p)  (2)
output = ConvBlock(concat(x_attention, x_upsampled))  (3)
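A minimal Keras sketch of Eqs. (1)-(3) in the 2D slice setting is given below, reusing conv_bn from the earlier sketch. Since the network operates on 2D slices, bilinear rather than trilinear upsampling is used here; the intermediate filter count and the additive gating arrangement follow Oktay et al. [20] and are assumptions where the text is silent.

```python
def attention_gate_block(x_res, x_lower, inter_filters):
    """Soft attention gate (Eqs. 1-3): the lower-level input gates the
    ResAtt-Path features, then gated and upsampled features are fused."""
    # Eq. (2): upsample the lower-level input and project it with a 1x1 conv
    x_up = layers.UpSampling2D(interpolation="bilinear")(x_lower)
    g = layers.Conv2D(inter_filters, 1)(x_up)
    # Eq. (1): additive attention -> ReLU -> sigmoid coefficients in [0, 1]
    theta = layers.Conv2D(inter_filters, 1)(x_res)
    coeff = layers.Activation("relu")(layers.Add()([g, theta]))
    coeff = layers.Conv2D(1, 1, activation="sigmoid")(coeff)
    x_attention = x_res * coeff              # gate the skip-connection features
    # Eq. (3): concatenate gated skip features with the upsampled input
    merged = layers.Concatenate()([x_attention, x_up])
    return conv_bn(merged, inter_filters, 3)
```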

2.8 Loss function

Segmentation model performance depends not only on the architecture of the network but also on the choice of loss function [27], particularly in scenarios with high class imbalance. As we observed imbalance in the SSAT, DSAT, and VAT distributions, we identified the focal dice loss as an appropriate loss function for handling class imbalance. The focal dice loss combines the focal loss with γ = 0.5 (Eq. (4)) and the dice loss (Eq. (5)), making it robust for imbalanced class problems. It makes use of weighted components for each class based on their representation.

Focal loss = −(1 − ρ_t)^γ log(ρ_t)  (4)
Dice loss = 1 − dice coefficient = 1 − 2|A ∩ B| / (|A| + |B|)  (5)
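A minimal TensorFlow sketch of this hybrid loss is shown below, assuming one-hot targets and softmax predictions of shape (batch, H, W, classes). The equal weighting of the two terms and uniform per-class weights are assumptions where the text does not pin them down.

```python
import tensorflow as tf

def focal_dice_loss(y_true, y_pred, gamma=0.5, eps=1e-6):
    """Hybrid focal (Eq. 4) + dice (Eq. 5) loss for imbalanced classes."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    # Focal term: -(1 - p_t)^gamma * log(p_t), p_t = predicted true-class probability
    focal = -tf.reduce_mean(tf.reduce_sum(
        y_true * tf.pow(1.0 - y_pred, gamma) * tf.math.log(y_pred), axis=-1))
    # Soft dice term, averaged over classes and batch
    inter = tf.reduce_sum(y_true * y_pred, axis=[1, 2])
    union = tf.reduce_sum(y_true, axis=[1, 2]) + tf.reduce_sum(y_pred, axis=[1, 2])
    dice = 1.0 - tf.reduce_mean((2.0 * inter + eps) / (union + eps))
    return focal + dice   # equal weighting of the two terms is an assumption
```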

2.9 Post processing

Fat sub-region volumetric analysis and sub-region volume percentages are computed using Eqs. (6) and (7):

V_r = (TP_ssat + TP_dsat + TP_vat) × I_r / 1000  (6)

where TP_ssat, TP_dsat, and TP_vat correspond to the predicted voxel counts of the SSAT, DSAT, and VAT classes, and I_r corresponds to each subject's voxel resolution. Sub-region volume percentages are computed using Eq. (7), where TP_i is the true positive volume of class i and TP_v is the total volume of the fat region:

%V_c = (TP_i / TP_v) × 100  (7)
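As a worked sketch, the following NumPy snippet computes Eqs. (6) and (7) from a predicted 3D label map; the class indices and the helper name are illustrative.

```python
import numpy as np

def quantify_fat(pred_labels, voxel_vol_mm3):
    """Eq. (6): per-class volume in mL; Eq. (7): percentage of total fat volume.
    pred_labels: 3D integer label map; indices 1/2/3 for SSAT/DSAT/VAT assumed."""
    classes = {"SSAT": 1, "DSAT": 2, "VAT": 3}
    vols = {name: np.count_nonzero(pred_labels == k) * voxel_vol_mm3 / 1000.0
            for name, k in classes.items()}
    total = sum(vols.values())
    pcts = {name: 100.0 * v / total for name, v in vols.items()}
    return vols, pcts
```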

2.10 Training parameters

Single-contrast fat-only 3D MR Dixon scans were converted to 2D slices for training (approximately 8000 2D slices). Training was conducted on the Ubuntu 18.04 LTS operating system with an NVIDIA Titan X GPU, with code written using the TensorFlow framework [28]; the hyperparameters of MultiRes-Attention U-Net are shown in Table 1.

Training parameter | Value
Number of filters at each level of U-Net | 16, 32, 64, 128, 256
Epochs | 150
Optimizer | ADAM [29]
Learning rate | 0.00001
Loss function | Focal dice loss
Weight decay | 2e-6
Dropout | 0.05
Patience | 15

Table 1.

Hyperparameter values used in training MultiRes-Attention U-Net.
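The settings in Table 1 translate into a short Keras training configuration, sketched below. The model builder and dataset objects are placeholders, and weight decay is applied through the AdamW optimizer available in newer TensorFlow releases; these are assumptions, not the verbatim training script.

```python
import tensorflow as tf

model = build_multires_attention_unet()   # hypothetical builder using the blocks above

model.compile(
    optimizer=tf.keras.optimizers.AdamW(learning_rate=1e-5, weight_decay=2e-6),
    loss=focal_dice_loss)                 # focal dice loss from Section 2.8

early_stop = tf.keras.callbacks.EarlyStopping(
    patience=15, restore_best_weights=True)   # "Patience" row of Table 1

model.fit(train_ds, validation_data=val_ds, epochs=150, callbacks=[early_stop])
```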

2.11 Performance analysis

The multiclass Dice ratio (DR) and the Hausdorff distance were the two performance metrics used to evaluate the segmentation of the fat subregions comprising SSAT, DSAT, and VAT.

The similarity between the predicted and ground-truth segmentations is assessed by measuring their overlap using the multiclass Dice score, as illustrated in Eq. (8):

DSI_k = 2.0 × |(I_pred == k) ∩ (I_gt == k)| / (|I_pred == k| + |I_gt == k|)  (8)

where DSI_k is the per-class DSI value ranging between 0 and 1 (1 means complete overlap of the subregion), I_pred is the predicted output, I_gt is the ground truth, and k is the class label.
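A direct NumPy transcription of Eq. (8) over integer label maps might look as follows; the class indexing is an assumption.

```python
import numpy as np

def multiclass_dsi(pred, gt, labels=(1, 2, 3)):
    """Per-class Dice similarity index (Eq. 8) for SSAT/DSAT/VAT label maps."""
    scores = {}
    for k in labels:
        p, g = (pred == k), (gt == k)
        denom = p.sum() + g.sum()
        scores[k] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores
```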

The Hausdorff distance (HD) is defined as a distance between two compact non-empty subsets of a metric space [30]. To assess the similarity between the predicted (Pred) and ground-truth (GT) segmentations, both closed and bounded subsets of a given metric space M, the HD is defined as:

HD(Pred, GT) = max{h(Pred, GT), h(GT, Pred)}  (9)

h(Pred, GT) = max_{α ∈ Pred} dist(α, GT)  (10)

dist(α, GT) = min_{μ ∈ GT} ‖α − μ‖  (11)

where h(Pred, GT) is the directed distance from the predicted region to the ground truth, dist(α, GT) is the distance from a point α of Pred to the region GT, and ‖α − μ‖ is the point-to-point distance in the metric space. A smaller HD(Pred, GT) indicates better segmentation accuracy, i.e., a smaller mismatch area.
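SciPy provides a directed Hausdorff routine implementing Eq. (10); the symmetric measure of Eq. (9) is then the maximum over both directions, as sketched below. Scaling voxel indices by the acquisition spacing to obtain millimeters is an assumption about how the reported HD values were computed.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(pred_mask, gt_mask, spacing_mm):
    """Symmetric Hausdorff distance (Eq. 9) between two binary masks, in mm."""
    p = np.argwhere(pred_mask) * np.asarray(spacing_mm)  # voxel indices -> mm
    g = np.argwhere(gt_mask) * np.asarray(spacing_mm)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```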


3. Results

Accurate fat depot segmentation plays a significant role in evaluating fat distribution, which can be used as a biomarker to assess metabolic syndrome and obesity. Table 2 illustrates the training and testing Dice statistical index (DSI) (mean ± SD) and Hausdorff distances for the 3-class segmentation (class 1: superficial subcutaneous fat, class 2: deep subcutaneous fat, class 3: visceral fat) by MultiRes-Attention U-Net, MultiResUNet, and the standard U-Net, all trained with the focal dice loss function.

DSI score for training (focal dice loss) | SSAT | DSAT | VAT
U-Net | 0.9090 ± 0.023 | 0.8727 ± 0.035 | 0.8048 ± 0.113
MultiResUNet | 0.9751 ± 0.021 | 0.9732 ± 0.023 | 0.9679 ± 0.017
MultiRes-Attention U-Net | 0.9877 ± 0.022 | 0.9852 ± 0.024 | 0.9758 ± 0.022

DSI score for testing | SSAT | DSAT | VAT
U-Net | 0.9071 ± 0.020 | 0.8660 ± 0.043 | 0.7426 ± 0.140
MultiResUNet | 0.9706 ± 0.030 | 0.9657 ± 0.035 | 0.9586 ± 0.017
MultiRes-Attention U-Net | 0.9781 ± 0.029 | 0.9718 ± 0.0349 | 0.9711 ± 0.015

Hausdorff distance | SSAT HD (mm) | DSAT HD (mm) | VAT HD (mm)
U-Net | 4.8385 ± 0.023 | 4.5830 ± 0.4202 | 5.5176 ± 0.113
MultiResUNet | 4.232 ± 0.121 | 4.323 ± 0.3242 | 4.332 ± 0.765
MultiRes-Attention U-Net | 4.132 ± 0.868 | 4.199 ± 0.656 | 4.223 ± 0.133

Table 2.

Performance comparison of models.

The Dice scores (Table 2) indicate that all models show improved segmentation accuracy when trained with the focal dice loss function.


4. Discussion

The removal of the arm region is an important preprocessing step, as the arms contain SAT that may interfere with automatic segmentation. MR-based data augmentation techniques were used to increase the number of training samples and improve the generalization of the model. In this study, we have proposed a MultiRes-Attention U-Net for the segmentation of the three abdominal fat compartments, namely superficial subcutaneous fat, deep subcutaneous fat, and visceral fat. The algorithm took about 5 s to accurately segment and quantify all three fat compartments, reducing analysis time significantly. This enables the use of our algorithm in clinical routines and large clinical trials.

Based on Table 2, the proposed algorithm performs better and provides more accurate segmentation output than MultiResUNet due to the introduction of the AG module. The attention module improved the identification of significant features, such as the fascia boundary and smaller VAT components around the spine, and prevented the network from learning false-positive information. The focal dice loss function was found to be more effective in improving the overall segmentation results than the cross-entropy (CE) and dice losses. Experimental results showed that the focal dice loss could handle the inherent class imbalance (the varying amounts of SSAT/DSAT/VAT in different slices) where the cross-entropy and dice loss functions failed. The mean test-set DSI with focal dice loss was about 97.81% for SSAT, 97.18% for DSAT, and 97.11% for VAT, a significant improvement of about 7%, 11%, and 23%, respectively, over the standard U-Net results. The average Hausdorff distance (AHD) of the proposed architecture is slightly better than that of MultiResUNet and significantly better than that of the standard U-Net for all three classes (SSAT, DSAT, and VAT). In addition, the model was able to separate SAT into SSAT and DSAT in lean subjects (broken or invisible fascia) and obese subjects (multiple fasciae). As shown in Figure 5, the model was also able to differentiate between VAT and bones, especially in the spine and pelvic regions. Further, MultiRes-Attention U-Net was tested on 190 new data sets (unseen during training; upper and lower abdomen scans with different resolutions), as illustrated in Figure 6, and yielded accurate results for SSAT and DSAT but produced a few false positives for VAT in the sacrum region.

Figure 5.

Comparison of the predicted results of U-Net, MultiResUNet, and MultiRes-Attention U-Net (loss function: focal dice) on low-, medium-, and high-fat subjects.

Figure 6.

Illustration of the predicted results of MultiRes-Attention U-Net on a few selected samples from the 190 new data sets (unseen during training; upper and lower abdomen scans with different resolutions).


5. Conclusion

In this study, we propose a MultiRes-Attention U-Net with a hybrid loss function for the segmentation of superficial and deep subcutaneous adipose tissue (SSAT and DSAT) and visceral adipose tissue (VAT) from abdominal MR scans. The MultiRes block, ResAtt-Path, and attention gates can handle shape, scale, and heterogeneity in the abdominal data. Model performance also depends on the loss function, especially when there is data imbalance; in this work, the focal dice loss function was found to be more effective in improving the overall segmentation results than the cross-entropy (CE) and dice losses. The proposed pipeline comprises pre-processing, data augmentation, automatic segmentation of the fat compartments, and fat quantification. The proposed algorithm takes less than 5 s for the segmentation and quantification of the three fat compartments and provides generalizable results: the model was able to separate SAT into SSAT and DSAT in lean subjects (broken or invisible fascia) and in obese subjects (multiple fasciae), and to differentiate small VAT tissue from bones, making it feasible for use in large clinical trials and clinical routine.

References

  1. World Health Organization. Obesity and overweight fact sheet. Available from: https://www.who.int/news-room/fact-sheets/detail/obesity-and-overweight
  2. Tremmel M, Gerdtham UG, Nilsson PM, Saha S. Economic burden of obesity: A systematic literature review. International Journal of Environmental Research and Public Health. 2017;14(4):435. DOI: 10.3390/ijerph14040435
  3. Brons C, Grunnet LG. Mechanisms in endocrinology: Skeletal muscle lipotoxicity in insulin resistance and type 2 diabetes: A causal mechanism or an innocent bystander? European Journal of Endocrinology. 2017;176:R67-R78. DOI: 10.1530/EJE-16-0488
  4. St-Pierre J, Lemieux I, Vohl MC, Perron P, Tremblay G, Despres JP, et al. Contribution of abdominal obesity and hypertriglyceridemia to impaired fasting glucose and coronary artery disease. The American Journal of Cardiology. 2002;90:15-18
  5. Chan JM, Rimm EB, Colditz GA, Stampfer MJ, Willett WC. Obesity, fat distribution, and weight gain as risk factors for clinical diabetes in men. Diabetes Care. 1994;17:961-969
  6. Seabolt LA, Welch EB, Silver HJ. Imaging methods for analyzing body composition in human obesity and cardiometabolic disease. Annals of the New York Academy of Sciences. 2015;1353:41-59. DOI: 10.1111/nyas.12842
  7. Baum T, Cordes C, Dieckmeyer M, Ruschke S, Franz D, Hauner H, et al. MR-based assessment of body fat distribution and characteristics. European Journal of Radiology. 2016;85:1512-1518. DOI: 10.1016/j.ejrad.2016.02.013
  8. Schar M, Eggers H, Zwart NR, Chang Y, Bakhru A, Pipe JG. Dixon water-fat separation in PROPELLER MRI acquired with two interleaved echoes. Magnetic Resonance in Medicine. 2016;75:718-728. DOI: 10.1002/mrm.25656
  9. Positano V, Gastaldelli A, Sironi AM, Santarelli MF, Lombardi M, Landini L. An accurate and robust method for unsupervised assessment of abdominal fat by MRI. Journal of Magnetic Resonance Imaging. 2004;20:684-689. DOI: 10.1002/jmri.20167
  10. Demerath EW, Ritter KJ, Couch WA, Rogers NL, Moreno GM, Choh A, et al. Validity of a new automated software program for visceral adipose tissue estimation. International Journal of Obesity. 2007;31:285-291
  11. Kullberg J, Angelhed JE, Lonn L, Brandberg J, Ahlstrom H, Frimmel H, et al. Whole-body T1 mapping improves the definition of adipose tissue: Consequences for automated image analysis. Journal of Magnetic Resonance Imaging. 2006;24:394-401. DOI: 10.1002/jmri.20644
  12. Chew J, Yeo A, Yew S, Tan CN, Lim JP, Hafizah Ismail N, et al. Nutrition mediates the relationship between osteosarcopenia and frailty: A pathway analysis. Nutrients. 2020;12(10):2957. DOI: 10.3390/nu12102957
  13. Kn BP, Gopalan V, Lee SS, Velan SS. Quantification of abdominal fat depots in rats and mice during obesity and weight loss interventions. PLoS One. 2014;9:e108979. DOI: 10.1371/journal.pone.0108979
  14. McBee MP, Awan OA, Colucci AT, Ghobadi CW, Kadom N, Kansagra AP, et al. Deep learning in radiology. Academic Radiology. 2018;25(11):1472-1480. DOI: 10.1016/j.acra.2018.02.018
  15. Grainger AT, Krishnaraj A, Quinones MH, Tustison NJ, Epstein S, Fuller D, et al. Deep learning-based quantification of abdominal subcutaneous and visceral fat volume on CT images. Academic Radiology. 2021;28(11):1481-1487. DOI: 10.1016/j.acra.2020.07.010
  16. Nandakumar G, Srinivasan G, Kim H, Pi J. Comprehensive end-to-end workflow for visceral adipose tissue and subcutaneous adipose tissue quantification: Use case to improve MRI accessibility. In: 2020 IEEE 20th International Conference on Bioinformatics and Bioengineering (BIBE), Cincinnati, OH, USA; 2020. pp. 1060-1064. DOI: 10.1109/BIBE50027.2020.00179
  17. Estrada S, Lu R, Conjeti S, Orozco-Ruiz X, Panos-Willuhn J, Breteler MM, et al. FatSegNet: A fully automated deep learning pipeline for adipose tissue segmentation on abdominal Dixon MRI. Magnetic Resonance in Medicine. 2019;83:1471-1483
  18. Nowak S, Theis M, Wichtmann BD, Faron A, Froelich MF, Tollens F, et al. End-to-end automated body composition analyses with integrated quality control for opportunistic assessment of sarcopenia in CT. European Radiology. 2022;32(5):3142-3151. DOI: 10.1007/s00330-021-08313-x
  19. Küstner T, Hepp T, Fischer M, Schwartz M, Fritsche A, Häring HU, et al. Fully automated and standardized segmentation of adipose tissue compartments via deep learning in 3D whole-body MRI of epidemiologic cohort studies. Radiology: Artificial Intelligence. 2020;2(6):e200010. DOI: 10.1148/ryai.2020200010
  20. Oktay O, Schlemper J, Folgoc LL, Lee MJ, Heinrich MP, Misawa K, et al. Attention U-Net: Learning where to look for the pancreas. arXiv:1804.03999. 2018
  21. Kafali SG, Shih SF, Li X, Chowdhury S, Loong S, Barnes S, et al. 3D neural networks for visceral and subcutaneous adipose tissue segmentation using volumetric multi-contrast MRI. In: Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); 2021. pp. 3933-3937. DOI: 10.1109/EMBC46164.2021.9630110
  22. Ibtehaz N, Rahman MS. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Networks. 2020;121:74-87
  23. Bhanu PK, Arvind CS, Yeow LY, Chen WX, Lim WS, Tan CH. CAFT: A deep learning-based comprehensive abdominal fat analysis tool for large cohort studies. MAGMA. 2022;35(2):205-220. DOI: 10.1007/s10334-021-00946-9
  24. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. arXiv:1505.04597. 2015
  25. He F, Liu T, Tao D. Why ResNet works? Residuals generalize. IEEE Transactions on Neural Networks and Learning Systems. 2020;31:5349-5362
  26. Cao Y, Liu S, Peng Y, Li J. DenseUNet: Densely connected UNet for electron microscopy image segmentation. IET Image Processing. 2020;14:2682-2689
  27. Sudre CH, Li W, Vercauteren T, Ourselin S, Cardoso MJ. Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Springer; 2017. pp. 240-248
  28. Braiek HB, Khomh F. TFCheck: A TensorFlow library for detecting training issues in neural network programs. In: 2019 IEEE 19th International Conference on Software Quality, Reliability and Security (QRS); 2019. pp. 426-433
  29. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv:1412.6980. 2015
  30. Andreev A, Kirov N. Hausdorff distances for searching in binary text images. Serdica Journal of Computing. 2009;3(1):23-46
