This is our most important practical program, and one of our most difficult. Factual information is essential if we are to have productive debates in a democracy. We do not have to agree with each other, but when we agree or disagree, we should at least be confident that we are agreeing or disagreeing about factual information. Below are two videos that examine the importance of combating disinformation/misinformation in a democracy, followed by a number of research initiatives.
A sample view from academia. What is disinformation? What can be done about it? Why are there no constitutional limitations on moderating content on social media, despite widespread reports that the First Amendment imposes them? (It does not.)
A sample view from fact-checking organizations. What can practically be done to combat disinformation? Why is it so difficult to moderate content? Why does fact checking still need people in the loop? Why can't AI do it all?
(Side note: In addition to the articles below, if you are a software developer interested in experimenting with fact-checking software, you may want to consult Papers with Code.)
One can easily envisage a formal (mathematical) proof assistant that would output Yes/No on a given claim in a certain domain of knowledge. But a bare Yes/No would not work: people need to see why, so explanations must be given, and they must be given in natural language.
Neural networks, on the other hand, achieve impressive performance on fact verification, but only in a black-box fashion, without explainability. So we need systems that combine these two techniques.
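The symbolic half of such a combination can be illustrated with a toy checker over a structured knowledge base: it returns not just a verdict but a natural-language explanation, which is exactly what a bare Yes/No lacks. All facts, relation names, and message formats here are invented for illustration.

```python
# Toy symbolic fact checker: verdict plus natural-language explanation.
# The knowledge base and claim format are hypothetical.

FACTS = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def check(subject, relation, obj):
    """Return (verdict, explanation) for a (subject, relation, object) claim."""
    if (subject, relation, obj) in FACTS:
        return True, (f"Supported: the knowledge base records that {subject} "
                      f"is {relation.replace('_', ' ')} {obj}.")
    # Look for a conflicting fact with the same subject and relation.
    for s, r, o in FACTS:
        if s == subject and r == relation and o != obj:
            return False, (f"Refuted: the knowledge base says {subject} is "
                           f"{r.replace('_', ' ')} {o}, not {obj}.")
    return None, "Not enough information: no matching fact in the knowledge base."

verdict, why = check("Paris", "capital_of", "Germany")
```

Because every verdict is traced back to an explicit fact, the explanation comes for free; the open problem is marrying this transparency with the coverage of neural models.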
ProofVer is a fact-verification system based on natural logic that offers both explainability and strong performance. It is joint work between the University of Cambridge and Facebook.
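The natural-logic idea can be sketched as a small deterministic automaton: each claim/evidence phrase pair is labelled with a natural-logic relation, and the sequence of relations drives state transitions that end in a verdict. The relation names and transition table below are a simplified assumption for illustration, not ProofVer's exact specification.

```python
# Simplified natural-logic verdict automaton.
# States: "S" = supported, "R" = refuted, "N" = not enough information.
TRANSITIONS = {
    ("S", "equivalence"): "S",
    ("S", "forward_entailment"): "S",
    ("S", "negation"): "R",
    ("S", "alternation"): "R",
    ("S", "independence"): "N",
    ("R", "equivalence"): "R",
    ("R", "negation"): "S",   # double negation flips back (simplified)
    ("N", "equivalence"): "N",
}

def verdict(relations, start="S"):
    """Fold a sequence of natural-logic relations into a final verdict state."""
    state = start
    for rel in relations:
        # Unlisted (state, relation) pairs conservatively fall through to "N".
        state = TRANSITIONS.get((state, rel), "N")
    return state
```

The sequence of relations itself serves as the proof: reading it off step by step is what makes the verdict explainable.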
As of now, these systems are used only to assist humans with fact verification. But the time will come when they do it all, without humans in the decision loop.
"It would be unreasonable to expect current artificial intelligence technologies to fully automate the fight against fake news. But there’s hope that the use of deep learning can help automate some of the steps of the fake news detection pipeline and augment the capabilities of human fact-checkers. In a paper presented at the 2019 NeurIPS AI conference, researchers at DarwinAI and Canada’s University of Waterloo presented an AI system that uses advanced language models to automate stance detection, an important first step toward identifying disinformation."
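Stance detection asks how a piece of text positions itself toward a claim. The crude lexical heuristic below only stands in for the advanced language models the paper describes, but it shows the shape of the task; the label set follows the common agree/disagree/discuss/unrelated scheme, and the cue words and thresholds are invented.

```python
# Toy stance detector: label a snippet's stance toward a claim using
# word overlap and negation cues. Thresholds are arbitrary assumptions.

NEGATION_CUES = {"not", "no", "false", "fake", "denies", "debunked", "hoax"}

def stance(claim, snippet):
    """Return one of: 'agree', 'disagree', 'discuss', 'unrelated'."""
    claim_words = set(claim.lower().split())
    snippet_words = set(snippet.lower().split())
    overlap = len(claim_words & snippet_words) / max(len(claim_words), 1)
    if overlap < 0.25:
        return "unrelated"          # snippet barely mentions the claim
    if snippet_words & NEGATION_CUES:
        return "disagree"           # claim is addressed and contradicted
    if overlap > 0.6:
        return "agree"
    return "discuss"                # mentions the claim without taking a side
```

As the quote notes, such surface features do not generalize well, which is precisely why deep language models took over this step.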
"Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring how fact-checking can be automated, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the veracity of claims.
In this paper, we survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines. In this process, we present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts. Finally, we highlight challenges for future research."
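The pipeline view taken in such surveys decomposes fact-checking into composable stages: claim (check-worthiness) detection, evidence retrieval, and veracity prediction. The placeholder functions below sketch that decomposition; the names, heuristics, and signatures are our own, not the survey's.

```python
# Three-stage fact-checking pipeline sketch with toy stand-in heuristics.

def detect_checkworthy(sentences):
    """Stage 1: keep sentences that look like factual claims (toy heuristic)."""
    return [s for s in sentences if any(ch.isdigit() for ch in s) or " is " in s]

def retrieve_evidence(claim, corpus):
    """Stage 2: rank corpus documents by word overlap with the claim."""
    cw = set(claim.lower().split())
    return sorted(corpus, key=lambda d: -len(cw & set(d.lower().split())))[:1]

def predict_veracity(claim, evidence):
    """Stage 3: toy verdict -- supported iff some evidence repeats the claim."""
    return ("supported" if any(claim.lower() in e.lower() for e in evidence)
            else "not enough info")

def fact_check(sentences, corpus):
    """Run all three stages and map each check-worthy claim to a verdict."""
    return {c: predict_veracity(c, retrieve_evidence(c, corpus))
            for c in detect_checkworthy(sentences)}
```

In real systems each stage is a learned model (often sharing representations), but the interfaces between stages look much like this.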
"As deep neural networks are black-box models, their inner workings cannot be easily explained. At the same time, it is desirable to explain how they arrive at certain decisions, especially if they are to be used for decision making.
While this has been known for some time, the issues this raises have been exacerbated by models increasing in size, and by EU legislation requiring models to be used for decision making to provide explanations, and, very recently, by legislation requiring online platforms operating in the EU to provide transparent reporting on their services. Despite this, current solutions for explainability are still lacking in the area of fact checking.
This thesis presents my research on automatic fact checking, including claim check-worthiness detection, stance detection and veracity prediction. Its contributions go beyond fact checking, with the thesis proposing more general machine learning solutions for natural language processing in the area of learning with limited labelled data. Finally, the thesis presents some first solutions for explainable fact checking."
"In order to tackle the rise and spreading of fake news, automatic detection techniques have been researched building on artificial intelligence and machine learning. The recent achievements of deep learning techniques in complex natural language processing tasks, make them a promising solution for fake news detection too. This work proposes a novel hybrid deep learning model that combines convolutional and recurrent neural networks for fake news classification."
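The hybrid architecture described above can be illustrated at the shape level: a 1-D convolution extracts local n-gram features from token embeddings, a simple recurrent layer aggregates them over time, and a sigmoid head outputs a fake/real probability. The dimensions are arbitrary and the weights are random and untrained; this only shows how the convolutional and recurrent components compose, not the paper's actual model.

```python
import numpy as np

# Shape-level sketch of a hybrid CNN + RNN fake-news classifier.
rng = np.random.default_rng(0)
emb_dim, n_filters, kernel, hidden = 16, 8, 3, 12

W_conv = rng.normal(0, 0.1, (n_filters, kernel * emb_dim))
W_xh = rng.normal(0, 0.1, (hidden, n_filters))
W_hh = rng.normal(0, 0.1, (hidden, hidden))
w_out = rng.normal(0, 0.1, hidden)

def forward(embeddings):
    """embeddings: (seq_len, emb_dim) array of token vectors."""
    # Convolution: slide a window of `kernel` tokens, ReLU activation.
    windows = [embeddings[i:i + kernel].ravel()
               for i in range(len(embeddings) - kernel + 1)]
    feats = np.maximum(0, np.array(windows) @ W_conv.T)   # (T', n_filters)
    # Simple (Elman-style) RNN over the convolutional feature sequence.
    h = np.zeros(hidden)
    for x in feats:
        h = np.tanh(W_xh @ x + W_hh @ h)
    # Sigmoid head: probability that the article is fake.
    return 1 / (1 + np.exp(-w_out @ h))

p = forward(rng.normal(size=(20, emb_dim)))
```

In a real implementation the same composition would be built in a deep-learning framework and trained end to end on labelled articles.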
"In the era of click-to-share and automated bots, traditional fact checking by journalists is not a way forward. The challenge has to be solved in an automated way to block the diffusion of misinformation as early as possible.
Researchers have tried to use language features like ‘sentiment’ and ‘length of headline’ to identify fake news. However, such features are heavily dependent on training data, and do not generalize well. Given that most fake stories have incorrect information, though difficult to delineate, we propose to use fact-checking against knowledge graphs and more trusted sources of information like Wikipedia to detect fake news."
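Fact-checking against a knowledge graph can be sketched as follows: a claim triple is accepted if it matches an edge directly, or judged merely plausible if a short path connects subject and object. The graph, relation names, and hop threshold below are invented for illustration; in practice the graph would be extracted from a trusted source such as Wikipedia.

```python
from collections import deque

# Toy knowledge graph of (subject, relation, object) triples.
EDGES = {
    ("Barack Obama", "born_in", "Honolulu"),
    ("Honolulu", "city_in", "Hawaii"),
    ("Hawaii", "state_of", "United States"),
}

def neighbors(node):
    """Nodes adjacent to `node`, ignoring edge direction and relation."""
    return ({o for s, _, o in EDGES if s == node}
            | {s for s, _, o in EDGES if o == node})

def path_length(src, dst):
    """Breadth-first search for the shortest undirected path length."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in neighbors(node) - seen:
            seen.add(nxt)
            queue.append((nxt, d + 1))
    return None  # no path

def check_triple(s, r, o, max_hops=2):
    """Supported if the edge exists; plausible if s and o are closely linked."""
    if (s, r, o) in EDGES:
        return "supported"
    d = path_length(s, o)
    return "plausible" if d is not None and d <= max_hops else "unsupported"
```

Graph proximity is of course only a weak proxy for truth; production systems combine such structural signals with textual evidence retrieval.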