
Vandana Malik
A. J. Singh

Abstract

Mitochondria are essential cell organelles of varying shape and size, and even slight changes in mitochondrial morphology are associated with neurodegenerative diseases. Advanced deep learning models such as U-Net, Mask R-CNN, MitoNet, MitoStructSeg, and MitoSkel perform accurate mitochondrial image analysis through image segmentation or morphological quantification, but they lack the ability to interpret the results they produce. This research work proposes a novel unified XM-DL framework (Explainable Mitochondrial Deep Learning Based Framework) capable of performing multiple tasks in a single pipeline: image segmentation, morphological quantification, classification of mitochondria by shape, and interpretation of results using explainable artificial intelligence (XAI) techniques. The XM-DL framework combines a U-Net architecture with residual connections, skip connections, and attention gates for image segmentation, followed by a post-processing module for morphological quantification, and uses Gradient-weighted Class Activation Mapping (Grad-CAM) as the explainable AI component to form a unified pipeline. The XM-DL framework was trained on the MitoEM dataset and achieved a high F1 score of 0.9322 and an IoU (intersection over union) of 0.8793 on the image segmentation task. The XM-DL framework assists medical service providers by improving the interpretability and understanding of deep learning techniques.
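To make the reported evaluation concrete, the following is a minimal numpy sketch of how pixel-wise F1 and IoU can be computed for binary segmentation masks; the function name and toy masks are illustrative, not taken from the XM-DL implementation.

```python
import numpy as np

def segmentation_f1_iou(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise F1 score and IoU for binary segmentation masks.

    pred, truth: arrays of identical shape; nonzero = mitochondrion.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    denom_f1 = 2 * tp + fp + fn
    denom_iou = tp + fp + fn
    f1 = 2 * tp / denom_f1 if denom_f1 else 1.0
    iou = tp / denom_iou if denom_iou else 1.0
    return float(f1), float(iou)

# Toy 4x4 example: prediction overlaps ground truth except one pixel.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=bool)
f1, iou = segmentation_f1_iou(pred, truth)  # tp=3, fp=1, fn=0
```

Here F1 = 2·3 / (2·3 + 1 + 0) ≈ 0.857 and IoU = 3 / (3 + 1 + 0) = 0.75, illustrating why F1 is always at least as large as IoU on the same masks, consistent with the 0.9322 / 0.8793 pair reported above.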


How to Cite
Malik, V., & Singh, A. J. (2026). Explainable Mitochondrial Image Segmentation and Morphological Quantification using Deep Learning Based Framework. International Journal of Basic and Applied Science, 14(4), 152–161. https://doi.org/10.35335/ijobas.v14i4.845