An Efficient Privacy-Preserving Secure Aggregation Scheme for Federated Learning with Input Verification and Dropout Resistance
DOI: https://doi.org/10.4108/eetsis.11991
Keywords: Secure Aggregation, Privacy Preservation, Secret Sharing, Zero-Knowledge Proofs
Abstract
Federated learning, as a distributed machine learning paradigm, allows multiple participants to collaboratively train a shared model without sharing their local data. However, the growing demand for privacy protection during data aggregation in distributed systems makes it challenging to ensure both security and efficiency. Many existing Privacy-Preserving Machine Learning (PPML) schemes rely on homomorphic encryption, which introduces substantial computational overhead during aggregation and renders them impractical for large-scale PPML applications involving resource-constrained participant devices. Moreover, device dropout events and data poisoning attacks by malicious clients compromise the integrity of the aggregated results. To address these challenges, this paper proposes an efficient privacy-preserving secure aggregation scheme that tolerates participant dropout at arbitrary stages and protects data against both semi-honest and malicious participants. By integrating input verification protocols and applying gradient masking techniques, the scheme strengthens its resilience against malicious attacks while preserving user data privacy. Leveraging the additive homomorphic property of Shamir's secret sharing enables efficient recovery of the global mask, substantially reducing the scheme's aggregation cost. Experimental results demonstrate that the proposed scheme significantly outperforms baseline methods in computational efficiency, communication overhead, and security robustness. By balancing strong privacy protection with practical feasibility, this scheme presents a promising solution for secure multi-party aggregation in large-scale distributed systems.
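The additive homomorphism the abstract relies on can be illustrated concretely. The sketch below is not the authors' implementation; it is a minimal textbook Shamir scheme over a prime field (the modulus, threshold, and mask values are illustrative assumptions), showing that shares of per-client masks can be summed pointwise so that servers reconstruct only the sum of the masks, never any individual one.

```python
# Minimal Shamir secret sharing over a prime field, illustrating the
# additive homomorphic property used for global mask recovery.
# All parameters (modulus P, threshold t=3, n=5, mask values) are
# illustrative, not taken from the paper.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative)

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # The share held by party x is the degree-(t-1) polynomial at x.
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Two clients secret-share their random masks among n=5 servers (t=3).
mask_a, mask_b = 123456, 654321
shares_a = share(mask_a, 3, 5)
shares_b = share(mask_b, 3, 5)

# Each server adds the shares it holds; any 3 summed shares then
# reconstruct mask_a + mask_b without exposing either mask alone.
sum_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(shares_a, shares_b)]
print(reconstruct(sum_shares[:3]))  # 777777 == mask_a + mask_b
```

Because the sum of two degree-(t-1) polynomials is again a degree-(t-1) polynomial whose constant term is the sum of the secrets, this recovery costs only one interpolation regardless of how many clients contributed, which is the source of the efficiency gain the abstract describes.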
Copyright (c) 2026 Zijun Guo, Yuteng Sun, Xinyue Zhang, Lingling Wu

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistributing, remixing, transforming, and building upon the material in any medium, so long as the original work is properly cited.
