2024-06-11, Room A
Fake news threatens democracies, public health, and news outlets' credibility. For this reason, tackling misinformation is an open challenge faced by governments, private companies, and the scientific community (Vosoughi et al., 2018).
Many approaches have been proposed: some based on AI methods, others on fact-checking by human experts, and still others on a combination of the two (Manzoor et al., 2019; Nakov et al., 2021). However, fake news detection algorithms are often owned by private social media companies, and the adoption of "black-box" models further contributes to the lack of transparency in how fake news is identified and filtered.
Debunker-Assistant is an application that allows users and newspapers to assess the trustworthiness of a news item from its headline, body text, and URL. Inspired by the survey by Ruffo et al. (2023), it adapts ideas from Natural Language Processing and Network Science to counter the spread of online misinformation.
Its centerpiece is a set of four News Misinformation Indicators: Echo Effect, Alarm Bell, Sensationalism, and Reliability. These indicators are designed on the basis of specific linguistic and network features, such as the absence of sources, the non-authoritativeness of references, the presence of particular figures of speech or flames, and other stylistic characteristics.
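For illustration only, surface-level stylistic cues of this kind can be sketched as simple string features. The function below is an invented example, not the actual Debunker-Assistant implementation: its name, feature set, and example headline are assumptions.

```python
import re

def stylistic_cues(headline: str) -> dict:
    """Hypothetical sketch of surface cues a sensationalism-style
    indicator might inspect (names and features are illustrative)."""
    words = headline.split()
    return {
        # Exclamation marks and repeated punctuation often signal sensationalism.
        "exclamations": headline.count("!"),
        "repeated_punct": len(re.findall(r"[!?]{2,}", headline)),
        # All-caps tokens (longer than 3 characters) are a common clickbait cue.
        "all_caps_words": sum(1 for w in words if w.isupper() and len(w) > 3),
        # Absence of quoted material is a weak cue for missing sources.
        "has_quote": '"' in headline or "«" in headline,
    }

cues = stylistic_cues("INCREDIBILE!! Il vaccino nasconde la verità??")
print(cues["all_caps_words"], cues["repeated_punct"])  # prints: 1 2
```

A real indicator would combine many such features, learned weights, and network signals; this sketch only shows the general flavor of feature extraction from a headline.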
The application works with Italian and is not bound to a specific topic: it is designed as a general-purpose tool that can extract relevant features for assessing the quality of information.
Its main purposes are: 1) displaying indicators that help users deal with misinformation; 2) supporting de-biasing mechanisms that make the internet more trustworthy; and 3) showing insights about a given context to aid the search for and discovery of information.
Given the complex and ever-changing nature of content creation and information dissemination, there are several directions for improvement. For example, users could provide anonymous feedback on the news itself and on the characterization of the evaluated articles, improving the tool's overall performance, especially for features that are less explored in the literature. In addition, this type of interaction prompts users to reflect on important aspects of online information, thus increasing awareness.
Over time, as users search for new URLs, the core data that feed the models will expand to cover larger and more diverse sets of domains, incorporating a richer perspective on news consumption. To support these research directions, we plan to develop a user-friendly interface and evaluate the overall user experience. Finally, a future challenge is to extend the model to other languages, starting with English.
References
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
Manzoor, S. I., Singla, J., & Nikita. (2019). Fake news detection using machine learning approaches: A systematic review. In 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI), pp. 230–234.
Nakov, P., Corney, D. P. A., Hasanain, M., Alam, F., Elsayed, T., Barrón-Cedeño, A., Papotti, P., Shaar, S., & Da San Martino, G. (2021). Automated fact-checking for assisting human fact-checkers. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
Ruffo, G., Semeraro, A., Giachanou, A., & Rosso, P. (2023). Studying fake news spreading, polarisation dynamics, and manipulation by bots: A tale of networks and language. Computer Science Review, 47, 100531.
Marco Antonio Stranisci is a postdoctoral researcher in Computer Science in Turin and the founder of aequa-tech.