Word Embedding for Text Classification: Efficient CNN and Bi-GRU Fusion Multi Attention Mechanism

Authors

  • Yalamanchili Salini V R Siddhartha Engineering College
  • Poluru Eswaraiah Vellore Institute of Technology University
  • M. Veera Brahmam Vellore Institute of Technology University
  • Uddagiri Sirisha P V P Siddhartha Institute of Technology

DOI:

https://doi.org/10.4108/eetsis.3992

Keywords:

Text categorization, Deep learning, Convolutional neural network, CNN, Gated recurrent unit, GRU, Attention

Abstract

The proposed methodology for text classification uses a deep learning fusion model that combines several attention-based Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) organized in a recurrent neural network. The Efficient CNN and Bi-GRU Fusion Multi-Attention Mechanism integrates CNNs and bidirectional Gated Recurrent Units (Bi-GRUs) with multi-attention mechanisms to strengthen word embeddings for text classification. The design extracts both local and global features of textual feature words and uses an attention mechanism to compute the significance of each word to the classification decision. By combining CNNs, Bi-GRUs, and multi-attention, the fusion model represents text documents more effectively, capturing both local and global contextual information and improving the model's ability to process and analyze textual data; combining diverse models in this way can also increase the precision of text categorization. Experiments were conducted on several data sets, including the IMDB film review data set and the THUCNews data set. The results show that the proposed model outperforms previous models that relied solely on CNN or LSTM, as well as fusion models that combined these architectures, in terms of accuracy, recall rate, and F1 score.
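
The article page does not include an implementation, so the following is a minimal PyTorch sketch of the architecture the abstract describes: an embedding layer feeding a CNN branch (local n-gram features) and a Bi-GRU branch (global context), with an attention layer weighting the recurrent states before the two views are fused for classification. All names and hyperparameters (filter counts, kernel sizes, hidden dimension, class count) are illustrative assumptions rather than the authors' settings, and the paper's multi-attention is simplified here to a single additive attention head over the Bi-GRU states.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CNNBiGRUAttention(nn.Module):
        """Sketch of a CNN + Bi-GRU fusion classifier with additive attention
        (illustrative only; not the authors' reference implementation)."""

        def __init__(self, vocab_size, embed_dim=128, num_filters=100,
                     kernel_sizes=(3, 4, 5), hidden_dim=128, num_classes=2):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            # CNN branch: parallel 1-D convolutions capture local n-gram features.
            self.convs = nn.ModuleList(
                nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
                for k in kernel_sizes
            )
            # Bi-GRU branch: captures global, order-dependent context in both directions.
            self.bigru = nn.GRU(embed_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            # Additive attention scores how much each token's hidden state
            # contributes to the final document representation.
            self.attn = nn.Linear(2 * hidden_dim, 1)
            fusion_dim = num_filters * len(kernel_sizes) + 2 * hidden_dim
            self.classifier = nn.Linear(fusion_dim, num_classes)

        def forward(self, token_ids):                      # (batch, seq_len)
            x = self.embedding(token_ids)                  # (batch, seq_len, embed_dim)

            # Local features: convolve over time, then max-pool each filter map.
            c = x.transpose(1, 2)                          # (batch, embed_dim, seq_len)
            local = torch.cat(
                [F.relu(conv(c)).max(dim=2).values for conv in self.convs], dim=1)

            # Global features: attention-weighted sum of Bi-GRU hidden states.
            h, _ = self.bigru(x)                           # (batch, seq_len, 2*hidden_dim)
            weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # (batch, seq_len)
            global_ = torch.bmm(weights.unsqueeze(1), h).squeeze(1)    # (batch, 2*hidden_dim)

            # Fuse both views and classify.
            return self.classifier(torch.cat([local, global_], dim=1))

    # Toy usage: 32 padded reviews of length 200 from a 20k-word vocabulary.
    model = CNNBiGRUAttention(vocab_size=20_000)
    logits = model(torch.randint(1, 20_000, (32, 200)))
    print(logits.shape)  # torch.Size([32, 2])

The toy call at the bottom produces a (32, 2) logit tensor, one score per class for each padded review; concatenating the max-pooled convolutional features with the attention-pooled recurrent state is the fusion step the abstract refers to.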

Published

26-09-2023

How to Cite

Salini Y, Eswaraiah P, Brahmam MV, Sirisha U. Word Embedding for Text Classification: Efficient CNN and Bi-GRU Fusion Multi Attention Mechanism. EAI Endorsed Scal Inf Syst [Internet]. 2023 Sep. 26 [cited 2024 May 20];10(6). Available from: https://publications.eai.eu/index.php/sis/article/view/3992