Brain tumor segmentation is a challenging problem in medical image analysis. The goal is to generate salient masks that accurately identify brain tumor regions in an fMRI scan. In this paper, we propose a novel attention-gate (AG) model for brain tumor segmentation that combines an edge-detecting unit with an attention-gated network to highlight and segment the salient regions of fMRI images. This removes the need to explicitly localize the damaged tissue and then classify it, as in classical computer vision pipelines. To provide useful constraints that guide feature extraction, we incorporate an edge attention-gated unit: this explicit edge-attention unit models image boundaries and enhances the learned representation. AGs can easily be integrated into deep convolutional neural networks (CNNs), adding minimal computational overhead while significantly increasing sensitivity scores. We show that the edge detector combined with the attention-gated mechanism is sufficient for brain tumor segmentation, reaching an IoU of 0.78. With this methodology, we aim to bring deep learning closer to human-level performance and to provide useful information for the diagnostic process.
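The abstract does not include implementation details, so the following is only a rough, hypothetical sketch of the additive attention-gate idea it refers to (in the spirit of Attention U-Net) together with the IoU metric it reports. The class and function names, channel sizes, and the use of PyTorch are all assumptions for illustration; the paper's edge-attention unit is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Hypothetical additive attention gate (sketch, not the authors' code).

    A coarser gating signal g and skip features x are projected to a common
    channel dimension, summed, passed through ReLU and a 1x1 convolution,
    and squashed with a sigmoid to yield a spatial attention map that
    re-weights the skip connection.
    """

    def __init__(self, x_channels, g_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # Bring the gating signal to the spatial resolution of the skip features.
        g_up = F.interpolate(self.phi_g(g), size=x.shape[2:],
                             mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta_x(x) + g_up)))
        return x * attn  # skip features re-weighted by the attention map


def iou(pred_mask, true_mask, eps=1e-7):
    """Intersection over union for binary masks (the metric reported above)."""
    pred, true = pred_mask.bool(), true_mask.bool()
    inter = (pred & true).float().sum()
    union = (pred | true).float().sum()
    return (inter / (union + eps)).item()


if __name__ == "__main__":
    gate = AttentionGate(x_channels=64, g_channels=128, inter_channels=32)
    x = torch.randn(1, 64, 64, 64)    # encoder skip features
    g = torch.randn(1, 128, 32, 32)   # coarser gating signal from the decoder
    print(gate(x, g).shape)           # torch.Size([1, 64, 64, 64])
```

An edge-attention unit of the kind described above could supply an additional boundary map that is combined with this gate, but since the abstract does not specify its exact form, it is left out of the sketch.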
Published in | American Journal of Artificial Intelligence (Volume 6, Issue 1)
DOI | 10.11648/j.ajai.20220601.14
Page(s) | 27-30
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright | Copyright © The Author(s), 2022. Published by Science Publishing Group
Keywords | fMRI, Brain Tumor Segmentation, CNNs, Attention Gates, Autoencoder, Segmentation, Biomedical Image Analysis
APA Style
Tim Cvetko. (2022). AGD-Autoencoder: Attention Gated Deep Convolutional Autoencoder for Brain Tumor Segmentation. American Journal of Artificial Intelligence, 6(1), 27-30. https://doi.org/10.11648/j.ajai.20220601.14
ACS Style
Tim Cvetko. AGD-Autoencoder: Attention Gated Deep Convolutional Autoencoder for Brain Tumor Segmentation. Am. J. Artif. Intell. 2022, 6(1), 27-30. doi: 10.11648/j.ajai.20220601.14
@article{10.11648/j.ajai.20220601.14,
  author   = {Tim Cvetko},
  title    = {AGD-Autoencoder: Attention Gated Deep Convolutional Autoencoder for Brain Tumor Segmentation},
  journal  = {American Journal of Artificial Intelligence},
  volume   = {6},
  number   = {1},
  pages    = {27-30},
  doi      = {10.11648/j.ajai.20220601.14},
  url      = {https://doi.org/10.11648/j.ajai.20220601.14},
  eprint   = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20220601.14},
  abstract = {Brain tumor segmentation is a challenging problem in medical image analysis. The goal is to generate salient masks that accurately identify brain tumor regions in an fMRI scan. In this paper, we propose a novel attention-gate (AG) model for brain tumor segmentation that combines an edge-detecting unit with an attention-gated network to highlight and segment the salient regions of fMRI images. This removes the need to explicitly localize the damaged tissue and then classify it, as in classical computer vision pipelines. To provide useful constraints that guide feature extraction, we incorporate an edge attention-gated unit: this explicit edge-attention unit models image boundaries and enhances the learned representation. AGs can easily be integrated into deep convolutional neural networks (CNNs), adding minimal computational overhead while significantly increasing sensitivity scores. We show that the edge detector combined with the attention-gated mechanism is sufficient for brain tumor segmentation, reaching an IoU of 0.78. With this methodology, we aim to bring deep learning closer to human-level performance and to provide useful information for the diagnostic process.},
  year     = {2022}
}
TY  - JOUR
T1  - AGD-Autoencoder: Attention Gated Deep Convolutional Autoencoder for Brain Tumor Segmentation
AU  - Tim Cvetko
Y1  - 2022/04/22
PY  - 2022
N1  - https://doi.org/10.11648/j.ajai.20220601.14
DO  - 10.11648/j.ajai.20220601.14
T2  - American Journal of Artificial Intelligence
JF  - American Journal of Artificial Intelligence
JO  - American Journal of Artificial Intelligence
SP  - 27
EP  - 30
PB  - Science Publishing Group
SN  - 2639-9733
UR  - https://doi.org/10.11648/j.ajai.20220601.14
AB  - Brain tumor segmentation is a challenging problem in medical image analysis. The goal is to generate salient masks that accurately identify brain tumor regions in an fMRI scan. In this paper, we propose a novel attention-gate (AG) model for brain tumor segmentation that combines an edge-detecting unit with an attention-gated network to highlight and segment the salient regions of fMRI images. This removes the need to explicitly localize the damaged tissue and then classify it, as in classical computer vision pipelines. To provide useful constraints that guide feature extraction, we incorporate an edge attention-gated unit: this explicit edge-attention unit models image boundaries and enhances the learned representation. AGs can easily be integrated into deep convolutional neural networks (CNNs), adding minimal computational overhead while significantly increasing sensitivity scores. We show that the edge detector combined with the attention-gated mechanism is sufficient for brain tumor segmentation, reaching an IoU of 0.78. With this methodology, we aim to bring deep learning closer to human-level performance and to provide useful information for the diagnostic process.
VL  - 6
IS  - 1
ER  -