Classification of Fake News by Fine-tuning Deep Bidirectional Transformers based Language Model
DOI: https://doi.org/10.4108/eai.13-7-2018.163973

Keywords: Fake news, Transfer learning, Deep learning, Natural language processing

Abstract
With the ever-increasing rate of information dissemination and absorption, "fake news" has become a real menace. People often fall prey to fake news that aligns with their existing perceptions. Checking the authenticity of news articles manually is a time-consuming and laborious task, giving rise to the need for automated computational tools that can assess the degree of fakeness of news articles. In this paper, a Natural Language Processing (NLP) based mechanism is proposed to address the challenge of classifying news articles as either fake or real. Transfer learning on the Bidirectional Encoder Representations from Transformers (BERT) language model is applied to this task. This paper demonstrates that, even with minimal text pre-processing, the fine-tuned BERT model is robust enough to perform well on the downstream task of news article classification. In addition, LSTM and Gradient Boosted Tree models are built for the same task, and comparative results are provided for all three models. The fine-tuned BERT model achieves an accuracy of 97.021% on the NewsFN data and outperforms the other two models by approximately eight percentage points.
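As a rough illustration of the kind of baseline the abstract compares against, the sketch below fits a gradient boosted tree classifier on TF-IDF features for a fake/real labelling task. This is a minimal sketch using scikit-learn, not the authors' pipeline: the toy corpus, the feature configuration, and all hyperparameters are assumptions, and the paper's NewsFN dataset is not reproduced here.

```python
# Illustrative Gradient Boosted Tree baseline for fake-vs-real news
# classification. Toy data and hyperparameters are assumptions; the
# paper's actual NewsFN setup is not shown in the abstract.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny stand-in for a labelled news corpus: 1 = fake, 0 = real.
texts = [
    "Scientists confirm moon base secretly built by celebrities",
    "Miracle cure erases all disease overnight, doctors stunned",
    "City council approves budget for new public library",
    "Central bank holds interest rates steady after quarterly review",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each article into a sparse word/bigram feature vector;
# the boosted trees then learn a fake/real decision over those features.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    GradientBoostingClassifier(n_estimators=50, random_state=0),
)
model.fit(texts, labels)

# Predict on unseen headlines; outputs are 0/1 class labels.
preds = model.predict(
    ["Aliens endorse miracle cure", "Council reviews budget proposal"]
)
print(list(preds))
```

In the paper's comparison, this family of model serves as a classical-ML reference point; the fine-tuned BERT model, which learns contextual representations instead of bag-of-words features, is reported to outperform it by roughly eight percentage points.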
License
Copyright (c) 2022 EAI Endorsed Transactions on Scalable Information Systems
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
This is an open access article distributed under the terms of the CC BY-NC-SA 4.0 license, which permits copying, redistribution, remixing, transformation, and building upon the material in any medium so long as the original work is properly cited.