Suicidal Ideation Detection and Influential Keyword Extraction from Twitter using Deep Learning (SID)

Authors

  • Xie-Yi. G., Asia Pacific University of Technology & Innovation

DOI:

https://doi.org/10.4108/eetpht.10.6042

Keywords:

attention mechanism, Bi-LSTM, deep learning, NLP, text classification

Abstract

INTRODUCTION: This paper focuses on building a text analytics-based solution to help suicide prevention communities detect suicidal signals in text data collected from online platforms and act to prevent tragedy.

OBJECTIVES: The objective of the paper is to build a suicide ideation detection (SID) model that can classify text as suicidal or non-suicidal, and a keyword extractor that extracts influential keywords, i.e., possible suicide risk factors, from the suicidal text.

METHODS: This paper proposes an attention-based Bi-LSTM model. The attention layer helps the deep learning model capture the keywords that most influence its classification decisions; these keywords, extracted from the text, are often closely related to suicide risk factors or the reasons behind suicidal ideation.
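The keyword-extraction idea in the METHODS paragraph can be illustrated with a minimal NumPy sketch of additive attention over per-token hidden states. All dimensions and parameters below are hypothetical stand-ins (randomly initialized, not the paper's trained Keras Bi-LSTM): given one hidden state per token, attention produces one weight per token, and the highest-weighted tokens can be read off as candidate influential keywords.

```python
import numpy as np

def additive_attention(hidden_states, W, b, u):
    """Score each token's hidden state with additive attention and return
    softmax-normalized per-token weights plus the pooled context vector."""
    # hidden_states: (seq_len, hidden_dim) -- stand-in for Bi-LSTM outputs
    scores = np.tanh(hidden_states @ W + b) @ u   # (seq_len,) raw scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over tokens
    context = weights @ hidden_states             # (hidden_dim,) weighted sum
    return weights, context

rng = np.random.default_rng(0)
seq_len, hidden_dim, attn_dim = 6, 8, 4
H = rng.normal(size=(seq_len, hidden_dim))        # fake Bi-LSTM hidden states
W = rng.normal(size=(hidden_dim, attn_dim))       # attention projection
b = np.zeros(attn_dim)
u = rng.normal(size=attn_dim)                     # attention query vector

tokens = ["i", "feel", "hopeless", "and", "alone", "tonight"]
weights, context = additive_attention(H, W, b, u)
# Tokens with the largest attention weights become candidate keywords
top = [tokens[i] for i in np.argsort(weights)[::-1][:2]]
print(top)
```

In a trained model the context vector feeds the classification head, so high-weight tokens are exactly the ones the classifier leaned on; here the parameters are random, so the "keywords" are arbitrary, but the mechanics are the same.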

RESULTS: The Bi-LSTM with Word2Vec embedding achieved the highest F1-score, 0.95. However, the attention-based Bi-LSTM with Word2Vec embedding, at an F1-score of 0.94, is expected to perform better on new, unseen data, as its learning curve indicates a good fit.

CONCLUSION: The absence of a systematic approach to validate and examine the keywords extracted by the attention mechanism and the RAKE algorithm is a gap that needs to be resolved. Future work can focus on a systematic, standardized approach for validating the accuracy of the extracted keywords.
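RAKE, which the conclusion names as the second keyword extractor, scores candidate phrases by word degree over frequency. The sketch below is a minimal, simplified version of the standard RAKE scoring (it splits only at stopwords, ignoring punctuation boundaries for brevity; the stopword set and example sentence are hypothetical):

```python
import re
from collections import defaultdict

def rake(text, stopwords):
    """Minimal RAKE: split text into candidate phrases at stopwords,
    score each word by degree/frequency, rank phrases by summed word scores."""
    words = re.findall(r"[a-z']+", text.lower())
    # Candidate phrases are maximal runs of non-stopwords
    phrases, current = [], []
    for w in words:
        if w in stopwords:
            if current:
                phrases.append(current)
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # co-occurrence degree, incl. the word itself
    scored = [(" ".join(p), sum(degree[w] / freq[w] for w in p)) for p in phrases]
    return sorted(scored, key=lambda item: -item[1])

ranked = rake("deep learning models detect suicidal ideation in social media text",
              stopwords={"in"})
print(ranked)
```

Because the phrase score is a plain sum of per-word scores, RAKE favors longer multi-word phrases, which is one reason its output differs from attention-derived keywords and why the two extractors need a common validation procedure.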




Published

13-05-2024

How to Cite

G. X-Y. Suicidal Ideation Detection and Influential Keyword Extraction from Twitter using Deep Learning (SID). EAI Endorsed Trans Perv Health Tech [Internet]. 2024 May 13 [cited 2024 Nov. 15];10. Available from: https://publications.eai.eu/index.php/phat/article/view/6042