Peer-Reviewed

Finding the Best Performing Pre-Trained CNN Model for Image Classification: Using a Class Activation Map to Spot Abnormal Parts in Diabetic Retinopathy Image

Received: 22 June 2021     Accepted: 5 July 2021     Published: 10 July 2021
Abstract

Diabetic retinopathy (DR) is a common eye disease caused by diabetes; about 33.7% of people with diabetes have DR. Using our dataset of retinal images with and without DR, we compared different convolutional neural network (CNN) models to find the best accuracy. We tested a default CNN model and five pre-trained models: MobileNet, VGG16, VGG19, Inception V3, and Inception ResNet V2. The default CNN performed poorly, reaching only 10.4% accuracy, and the pre-trained models also fell short of expectations, so we added a GRU to each model, which improved the scores. To raise accuracy further, we used a bidirectional GRU and trained all of the parameters in each model. The five pre-trained models achieved an average accuracy of 74.2%, and Inception ResNet V2 with a bidirectional GRU scored the highest, at 83.57%. As an additional study, we used a class activation map to locate the abnormal regions in retinas with DR, revealing abnormal blood vessels and bleeding. However, our research is limited in that we did not use segmentation methods such as U-Net, Fully Convolutional Network (FCN), DeepLab V3, or Feature Pyramid Network, which are more advanced techniques than classification. Furthermore, although our model classified five different classes, the fact that the highest accuracy was below 90% is also a limitation. In future work, we plan to prepare masks so that segmentation methods can be applied to our dataset.
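
As a sketch of the pipeline described in the abstract (not the authors' published code), the Python/TensorFlow snippet below shows how an ImageNet-pretrained Inception ResNet V2 backbone can be combined with a bidirectional GRU head for five-class DR grading, and how a Grad-CAM-style class activation map can be computed from the backbone's final feature map to highlight suspicious regions. The layer sizes, dropout rate, optimizer, and function names are illustrative assumptions rather than values reported in the paper.

# Minimal sketch: pre-trained Inception ResNet V2 backbone + bidirectional GRU
# head for 5-class diabetic-retinopathy grading, plus a Grad-CAM-style class
# activation map. Hyperparameters here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5                      # No DR, Mild, Moderate, Severe, Proliferative
IMG_SHAPE = (224, 224, 3)            # matches the 224x224 Gaussian-filtered images

backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
backbone.trainable = True            # fine-tune all parameters, as described above

inputs = layers.Input(shape=IMG_SHAPE)
feature_map = backbone(inputs)                       # (batch, 5, 5, 1536)
h, w, c = feature_map.shape[1], feature_map.shape[2], feature_map.shape[3]
x = layers.Reshape((h * w, c))(feature_map)          # read the grid as a 25-step sequence
x = layers.Bidirectional(layers.GRU(256))(x)         # bidirectional GRU over the sequence
x = layers.Dropout(0.3)(x)                           # assumed regularization
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Grad-CAM-style class activation map: weight the backbone feature map by the
# gradient of the predicted class score, then keep only the positive evidence.
grad_model = models.Model(inputs, [feature_map, outputs])

def class_activation_map(image, class_index=None):
    """Return a (5, 5) heatmap for one preprocessed (224, 224, 3) image array."""
    batch = tf.expand_dims(tf.cast(image, tf.float32), axis=0)
    with tf.GradientTape() as tape:
        fmap, preds = grad_model(batch)
        if class_index is None:
            class_index = tf.argmax(preds[0])        # default to the predicted class
        score = preds[:, class_index]
    grads = tape.gradient(score, fmap)               # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pool the gradients
    cam = tf.reduce_sum(weights[0] * fmap[0], axis=-1)
    cam = tf.nn.relu(cam)                            # keep only positive contributions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

Here the backbone's 5x5 spatial feature map is read as a 25-step sequence, which is one straightforward way to feed convolutional features into a recurrent layer; the heatmap returned by class_activation_map can be resized to 224x224 and overlaid on the fundus photograph to visualize the regions that drive the prediction.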

Published in American Journal of Biomedical and Life Sciences (Volume 9, Issue 4)
DOI 10.11648/j.ajbls.20210904.11
Page(s) 176-181
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Diabetes, Diabetic Retinopathy, Inception-ResNet-v2, Bi-GRU, CNN

Cite This Article
  • APA Style

    Jihyung Kim. (2021). Finding the Best Performing Pre-Trained CNN Model for Image Classification: Using a Class Activation Map to Spot Abnormal Parts in Diabetic Retinopathy Image. American Journal of Biomedical and Life Sciences, 9(4), 176-181. https://doi.org/10.11648/j.ajbls.20210904.11


  • ACS Style

    Jihyung Kim. Finding the Best Performing Pre-Trained CNN Model for Image Classification: Using a Class Activation Map to Spot Abnormal Parts in Diabetic Retinopathy Image. Am. J. Biomed. Life Sci. 2021, 9(4), 176-181. doi: 10.11648/j.ajbls.20210904.11


  • AMA Style

    Jihyung Kim. Finding the Best Performing Pre-Trained CNN Model for Image Classification: Using a Class Activation Map to Spot Abnormal Parts in Diabetic Retinopathy Image. Am J Biomed Life Sci. 2021;9(4):176-181. doi: 10.11648/j.ajbls.20210904.11




Author Information
  • Jihyung Kim, Walter Johnson High School, North Bethesda, Maryland, USA
