Compression and Transmission of Big AI Model Based on Deep Learning

Authors

Z. Lin, Y. Zhou, Y. Yang, J. Shi, J. Lin

DOI:

https://doi.org/10.4108/eetsis.3803

Keywords:

Big AI model, compression and transmission, deep learning, convolutional networks

Abstract

In recent years, big AI models have demonstrated remarkable performance across a wide range of artificial intelligence (AI) tasks. Their scale, however, introduces significant challenges for model transmission and training. This paper addresses these challenges by compressing and transmitting large models with deep learning techniques, thereby preserving the efficiency of model training. Specifically, we employ deep convolutional networks for model compression, reducing the size of large models without compromising their representational capacity. The proposed framework also includes carefully devised encoding and decoding strategies that guarantee the restoration of model integrity after transmission, together with a tailored loss function that jointly optimizes transmission and training performance within the system. Experimental evaluation demonstrates the efficacy of the proposed approach: large models are successfully compressed and accurately reconstructed while maintaining their performance across various AI tasks. This work contributes to ongoing research on improving the practicality and efficiency of deploying large models in real-world AI applications.
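
The paper does not include an implementation, but the pipeline the abstract describes (a convolutional compressor, a matching decoder for restoration after transmission, and a loss that couples both) can be illustrated concretely. Below is a minimal PyTorch sketch under stated assumptions: the class name ConvWeightCodec, the layer sizes, the bottleneck width, and the penalty weight lam are illustrative choices, not the authors' architecture.

```python
# Minimal sketch (assumed, not the authors' code): treat a large model's
# flattened weights as a 1-D signal, compress them with a convolutional
# encoder, and train encoder and decoder jointly with a loss that trades
# reconstruction fidelity against the size of the transmitted code.
import torch
import torch.nn as nn

class ConvWeightCodec(nn.Module):
    """1-D convolutional encoder/decoder over a flattened weight vector."""

    def __init__(self, bottleneck_channels: int = 1):
        super().__init__()
        # Two stride-2 convolutions shrink the signal to a quarter of its
        # original length (illustrative sizes, not from the paper).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
            nn.Conv1d(16, bottleneck_channels, kernel_size=9, stride=2, padding=4),
        )
        # Mirrored transposed convolutions restore the original length,
        # standing in for the paper's decoding strategy.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(bottleneck_channels, 16, kernel_size=9,
                               stride=2, padding=4, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=9,
                               stride=2, padding=4, output_padding=1),
        )

    def forward(self, w: torch.Tensor):
        code = self.encoder(w)      # compressed representation to transmit
        recon = self.decoder(code)  # weights reconstructed at the receiver
        return code, recon

def codec_loss(w, recon, code, lam=1e-3):
    """Assumed form of a 'tailored' loss: reconstruction error plus an L1
    rate penalty that encourages a small, sparse transmitted code."""
    return nn.functional.mse_loss(recon, w) + lam * code.abs().mean()

# Toy usage: compress a ~1M-parameter weight vector.
w = torch.randn(1, 1, 1 << 20)    # (batch, channel, flattened weights)
codec = ConvWeightCodec()
code, recon = codec(w)
loss = codec_loss(w, recon, code)
loss.backward()
print(code.numel() / w.numel())   # 0.25 with these illustrative strides
```

With these strides the transmitted code is a quarter the size of the weight vector; in practice the bottleneck width and the penalty weight lam would be tuned against the available channel budget and the accuracy the reconstructed model must retain.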

Published

11 December 2023

How to Cite

1. Lin Z, Zhou Y, Yang Y, Shi J, Lin J. Compression and Transmission of Big AI Model Based on Deep Learning. EAI Endorsed Scal Inf Syst [Internet]. 2023 Dec. 11 [cited 2024 May 6];11(2). Available from: https://publications.eai.eu/index.php/sis/article/view/3803
