Few-Shot Classification Of Brain Cancer Images Using Meta-Learning Algorithms
DOI: https://doi.org/10.4108/eetinis.124.10405
Keywords: brain cancer, cancer classification, few-shot learning, meta-learning
Abstract
Deep learning models typically achieve strong performance only when trained on large datasets; when sufficient data are unavailable, it becomes difficult to predict unfamiliar classes with high accuracy. In practice, real-world datasets often introduce new classes, and some types of data, such as medical images, are difficult to collect or simulate. Meta-learning, or "learning-to-learn", is a subset of machine learning that can tackle these problems. In this paper, a few-shot classification model is proposed to classify three classes of brain cancer images: glioma, meningioma, and brain tumor. To achieve this, we employ an episodic meta-training paradigm that integrates the model-agnostic meta-learning (MAML) framework with a prototypical network (ProtoNet). Specifically, ProtoNet learns a metric space by computing distances to a prototype for each class, while MAML finds initialization parameters that allow the model to adapt quickly from a few labeled samples. In addition, we compute and report the average accuracy of the baseline and of our method to assess the quality of the predictions. Simulation results indicate that the proposed approach substantially surpasses the baseline ResNet18 model, improving average accuracy from 46.33% to 92.08% across different few-shot settings. These findings highlight the potential of combining metric-based and optimization-based meta-learning techniques to improve diagnostic support in healthcare applications.
License
Copyright (c) 2025 Tuyet-Nhi Thi Nguyen, Muhammad Fahim, Bradley D. E. McNiven, Quang Nhat Le, Nhan Duc Le

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.