Circumventing Stragglers and Staleness in Distributed CNN using LSTM

Authors

A. Ravikumar, H. Sriraman, S. Lokesh, J. Sai

DOI:

https://doi.org/10.4108/eetiot.5119

Keywords:

Convolutional Neural Network, AWS SageMaker, Distributed Framework, Parameter Server, Exascale Computing, Distributed Autotuning

Abstract

INTRODUCTION: Using neural networks for inherently distributed applications is challenging and time-consuming. There is a crucial need for a framework that supports distributed deep neural networks and yields accurate results with accelerated training time.

METHODS: In the proposed framework, any user, whether expert or novice, can execute neural network models in a distributed manner with automated hyperparameter tuning. In addition, the framework is deployed on AWS SageMaker to scale the distribution and approach exascale FLOPS. We benchmarked the framework's performance by applying it to a medical dataset.
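As an illustration of how such a setup can be driven from the AWS SageMaker Python SDK, the sketch below launches a parameter-server-distributed training job over five instances and wraps it in an automated hyperparameter tuning job. The script name, IAM role, instance type, framework version, hyperparameter ranges, and metric regex are assumptions for illustration, not the paper's actual settings.

```python
# Hedged sketch: entry_point, role, instance type, ranges, and the metric
# regex are illustrative assumptions, not the authors' configuration.
from sagemaker.tensorflow import TensorFlow
from sagemaker.tuner import (HyperparameterTuner, ContinuousParameter,
                             IntegerParameter)

# CNN training script distributed over 5 workers with a parameter server.
estimator = TensorFlow(
    entry_point="train_cnn.py",                              # assumed script
    role="arn:aws:iam::123456789012:role/SageMakerRole",     # placeholder role
    instance_count=5,
    instance_type="ml.p3.2xlarge",
    framework_version="2.8",
    py_version="py39",
    distribution={"parameter_server": {"enabled": True}},
)

# Automated hyperparameter tuning over learning rate and batch size.
tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="val_accuracy",
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-4, 1e-1),
        "batch_size": IntegerParameter(32, 256),
    },
    metric_definitions=[{"Name": "val_accuracy",
                         "Regex": "val_accuracy: ([0-9\\.]+)"}],
    max_jobs=20,
    max_parallel_jobs=5,
)

tuner.fit({"training": "s3://my-bucket/medical-dataset/"})   # placeholder S3 path
```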

RESULTS: The maximum performance achieved is a speedup of 6.59 on 5 nodes. The framework encourages both expert and novice neural network users to run neural network models on the distributed platform and obtain enhanced results with accelerated training time. There has been considerable research on improving the training time of Convolutional Neural Networks (CNNs) using distributed models, with a particular emphasis on automating hyperparameter tuning. The study shows that training times can be reduced across the board not only by manual hyperparameter tuning, but also by using L2 regularization, a dropout layer, and ConvLSTM for automatic hyperparameter adjustment.
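The abstract names L2 regularization, a dropout layer, and a ConvLSTM layer as the key ingredients; the minimal Keras sketch below shows how such building blocks can fit together in one model. Layer counts, filter sizes, and the (frames, height, width, channels) input shape are illustrative assumptions, not the paper's architecture.

```python
# Illustrative only: layer sizes, input shape, and optimizer are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    # ConvLSTM layer over a short sequence of image frames (assumed shape).
    layers.ConvLSTM2D(32, kernel_size=3, activation="relu",
                      input_shape=(4, 64, 64, 1)),
    layers.BatchNormalization(),
    # Convolution with L2 weight regularization.
    layers.Conv2D(64, kernel_size=3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    # Dropout layer to reduce overfitting.
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```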

CONCLUSION: The proposed method improved training speed by 1.4% for model-parallel execution and by 2.206% for data-parallel execution. Data-parallel execution achieved a peak accuracy of 93.3825%, whereas model-parallel execution achieved a peak accuracy of 89.59%.
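For context on the data-parallel versus model-parallel distinction, the sketch below shows generic data-parallel training with TensorFlow's MultiWorkerMirroredStrategy (replicated model, sharded data); it is an illustration under assumed layer sizes, not the paper's implementation. Model parallelism would instead split the layers of a single model across devices.

```python
import tensorflow as tf

# Data-parallel sketch: every worker keeps a full model replica, trains on a
# different shard of the data, and gradients are synchronized across workers.
# The cluster layout is assumed to be supplied via TF_CONFIG by the launcher.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(64, 64, 1)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# model.fit(train_dataset, epochs=10)  # train_dataset: sharded tf.data pipeline
```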

Downloads

Download data is not yet available.


Published

14-02-2024

How to Cite

[1]
A. Ravikumar, H. Sriraman, S. Lokesh, and J. Sai, “Circumventing Stragglers and Staleness in Distributed CNN using LSTM”, EAI Endorsed Trans IoT, vol. 10, Feb. 2024.