A Hybrid Deep Learning GRU based Approach for Text Classification using Word Embedding

Authors

P. Eswaraiah and H. Syed

DOI:

https://doi.org/10.4108/eetiot.4590

Keywords:

Text Classification, Natural Language Processing, RNN, GRU, LSTM

Abstract

Text categorization has become an increasingly important task for businesses that handle massive volumes of data generated online, and it has found substantial use in natural language processing (NLP). The ability to group texts into separate categories is crucial for users to retain and exploit important information effectively. Our goal in this study is to improve upon existing recurrent neural network (RNN) techniques for text classification by developing a deep learning strategy. The main difficulty in text classification, however, is raising the quality of the classifications produced, since the overall efficacy of text classification is often hampered by the inadequate context sensitivity of the data semantics. To address this difficulty, our study presents a unified approach that examines the effects of word embedding and the gated recurrent unit (GRU) on text classification. We use the standard TREC dataset. In our setup, the RCNN variant has four convolution layers, four LSTM layers, and two GRU layers, while the RNN variant has four GRU layers and four LSTM layers. The GRU is a type of recurrent neural network well known for its ability to model sequential data. In our experiments, we found that words with comparable meanings typically lie near each other in the embedding space. The results demonstrate that our hybrid GRU model efficiently learns word usage patterns from the provided training set; note that the depth and breadth of the training data strongly influence the model's effectiveness. The proposed method performs remarkably well when compared to other well-known recurrent architectures such as the RNN, MV-RNN, and LSTM on a single benchmark dataset: the proposed model achieves an F-measure of 0.982, against 0.952 for the hybrid GRU. Comparing the proposed method with the three currently most popular recurrent neural network designs (RNNs, MV-RNNs, and LSTMs), we found that it achieves better results on two benchmark datasets in terms of both accuracy and error rate.
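
To illustrate the kind of hybrid architecture the abstract describes, the following minimal Python (Keras) sketch stacks GRU layers followed by LSTM layers over a trainable word-embedding layer, mirroring the abstract's "four GRU layers and four LSTM layers" variant. The vocabulary size, embedding dimension, hidden width, sequence length, and six-way output (matching TREC's six coarse question classes) are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a hybrid GRU/LSTM text classifier in tf.keras.
# All hyperparameters below are assumptions for illustration only.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed vocabulary size
EMBED_DIM = 300       # assumed word-embedding dimension
MAX_LEN = 50          # assumed maximum question length (tokens)
NUM_CLASSES = 6       # TREC's six coarse question categories

def build_hybrid_gru(num_gru=4, num_lstm=4):
    """Stack GRU layers followed by LSTM layers over an embedding,
    following the abstract's 'four GRU layers and four LSTM layers'."""
    model = models.Sequential()
    model.add(layers.Embedding(VOCAB_SIZE, EMBED_DIM))
    # Every recurrent layer except the last returns full sequences,
    # so the next recurrent layer receives one vector per time step.
    for _ in range(num_gru):
        model.add(layers.GRU(128, return_sequences=True))
    for i in range(num_lstm):
        last = (i == num_lstm - 1)
        model.add(layers.LSTM(128, return_sequences=not last))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_hybrid_gru()
    # Dummy batch of already-tokenized, padded question texts.
    x = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
    y = np.random.randint(0, NUM_CLASSES, size=(8,))
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x).shape)  # (8, 6): one class distribution per question

In practice, the embedding layer would be initialized with pretrained vectors (e.g., word2vec-style embeddings) so that words with similar meanings start out close together in the embedding space, as the abstract observes.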




Published

13-12-2023

How to Cite

[1]
P. Eswaraiah and H. Syed, “A Hybrid Deep Learning GRU based Approach for Text Classification using Word Embedding”, EAI Endorsed Trans IoT, vol. 10, Dec. 2023.