Implementation of Ensemble Weighted Voting on DenseNet, MobileNet, and Xception Architectures for Diabetic Retinopathy Disease Classification
DOI: https://doi.org/10.36080/idealis.v9i1.3714

Keywords: Diabetic Retinopathy, Ensemble, DenseNet, MobileNet, Xception

Abstract
Convolutional Neural Networks (CNNs) are a widely used deep learning approach for image classification and segmentation tasks, including in healthcare. One important application of CNNs is the analysis of Diabetic Retinopathy (DR) images; DR is a retinal disease caused by long-term complications of diabetes that can lead to visual impairment and even blindness if not detected early. However, single CNN architectures often face limitations such as overfitting, high computational cost, or suboptimal feature extraction. Ensemble methods can therefore be used to combine the strengths of several models to improve classification performance. This study proposes a weighted-voting ensemble that combines three CNN architectures, namely DenseNet, MobileNet, and Xception, for binary classification of Diabetic Retinopathy. DenseNet was chosen for its ability to extract rich features through inter-layer connectivity, MobileNet for its computational efficiency and small model size, and Xception for its ability to balance network depth and computational efficiency through depthwise separable convolutions. The research stages comprise data collection, model training, testing, and performance evaluation. The EyePACS dataset was used for training, while the APTOS dataset served as test data to assess the model's generalization ability. Experimental results show that the proposed ensemble method performs well, achieving 85.22% accuracy, 70.63% sensitivity, 99.40% specificity, an F1-score of 87.21%, and a Cohen's Kappa of 0.7032.
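The weighted-voting idea described above can be illustrated with a minimal sketch. The probabilities and weights below are hypothetical (the paper does not publish its per-model outputs or weight values); the point is only the mechanics of combining three models' positive-class probabilities into one binary DR / no-DR decision.

```python
import numpy as np

# Hypothetical positive-class (DR) probabilities from each model on 4 images.
# These values are illustrative only, not the paper's actual model outputs.
p_densenet = np.array([0.92, 0.15, 0.60, 0.40])
p_mobilenet = np.array([0.85, 0.20, 0.55, 0.35])
p_xception = np.array([0.88, 0.10, 0.70, 0.45])

# Example weights, e.g. proportional to each model's validation performance.
weights = np.array([0.40, 0.25, 0.35])

probs = np.vstack([p_densenet, p_mobilenet, p_xception])  # shape (3, 4)
ensemble = weights @ probs / weights.sum()                # weighted average per image

labels = (ensemble >= 0.5).astype(int)  # final binary decision per image
print(labels)  # → [1 0 1 0]
```

In a soft-voting variant like this one, the weights let a stronger model (here DenseNet) pull the combined probability toward its prediction, while weaker models still contribute.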
These results indicate that the ensemble approach improves classification performance and reduces overfitting compared with single CNN models, and that it has the potential to be developed into a decision-support system for automated Diabetic Retinopathy screening.
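The five metrics reported above are all derived from a binary confusion matrix. As a reference, the sketch below computes them from illustrative counts (these are not the paper's actual confusion-matrix values):

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute accuracy, sensitivity, specificity, F1, and Cohen's Kappa
    from a 2x2 confusion matrix (positive class = DR)."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)          # recall on the DR class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # Cohen's Kappa: observed agreement corrected for chance agreement
    p_o = accuracy
    p_e = ((tp + fp) / total) * ((tp + fn) / total) \
        + ((fn + tn) / total) * ((fp + tn) / total)
    kappa = (p_o - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, f1, kappa

# Illustrative counts only:
acc, sens, spec, f1, kappa = binary_metrics(tp=70, fp=1, fn=30, tn=99)
print(f"acc={acc:.4f} sens={sens:.4f} spec={spec:.4f} "
      f"f1={f1:.4f} kappa={kappa:.4f}")
```

The gap between the high specificity and the lower sensitivity reported in the abstract shows why accuracy alone is insufficient here: the model misses a nontrivial share of DR cases while almost never flagging healthy eyes, which Cohen's Kappa and the F1-score capture more directly.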
Copyright (c) 2026 Lucky Indra Kesuma, Des Alwine Zayanti, Anita Desiani, Purwita Sari, Zulhipni Reno Saputra, Muhammad Ihsan, Fathona Nur Muzayyadah

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.