Facebook has made fighting misinformation on its social network a priority, and the company has just announced that it is now fact-checking photos and videos shared by users.
Facebook’s 27 third-party fact-checking partners in 17 countries around the world have been reviewing articles for the past two years, and their workload has now been expanded to include verifying the truthfulness of visual content as well.
Facebook says that its first line of defense is artificial intelligence: machine learning is used to identify potentially false content using various “signals” — things like user feedback, text extraction (and comparison with news articles), and image manipulation detection.
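Facebook doesn’t disclose the details of its detection systems, but one common building block for spotting near-duplicate or lightly manipulated images is perceptual hashing. Below is a minimal sketch of the “average hash” technique, assuming images have already been scaled down to 8×8 grayscale grids; this is an illustration of the general idea, not Facebook’s actual implementation.

```python
# Minimal sketch of perceptual ("average") hashing, a common way to match
# near-duplicate images even after small edits. Illustrative only -- not
# Facebook's actual system. Images are assumed to be pre-scaled to 8x8
# grayscale grids (lists of lists of 0-255 values).

def average_hash(pixels):
    """Return a 64-bit hash: each bit is 1 if that pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# Usage: a slightly brightened copy of an image hashes close to the original,
# so a flagged photo can be matched against copies already seen elsewhere.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brighter = [[min(p + 10, 255) for p in row] for row in original]
print(hamming_distance(average_hash(original), average_hash(brighter)))  # prints 0
```

Because the hash captures coarse structure rather than exact pixel values, uniform edits like brightening or recompression leave the distance near zero, while unrelated images land far apart.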
Photos that have been flagged are then sent to the third-party fact-checkers.
“Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken,” Facebook says. “Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies.”
The conclusions of the human fact-checkers are then fed back into the AI to help improve its accuracy.
Types of false photos Facebook is trying to fight include ones that are manipulated or fabricated, presented out of context, or paired with false captions.
“People share millions of photos and videos on Facebook every day,” Facebook says. “We know that this kind of sharing is particularly compelling because it’s visual. That said, it also creates an easy opportunity for manipulation by bad actors.”