Deepfakes are highly realistic doctored videos made possible by artificial intelligence, and they represent a dangerous threat in the fight against misinformation. As altered clips grow more convincing by the day, so does the need for tools that can spot them and help people tell fact from fiction. Computer scientists at the University at Buffalo have created an example of what such a tool could look like: technology that reportedly detects portrait-style deepfakes with 94% accuracy by checking tiny reflections in the eyes of a subject.
Deepfake footage is created by using deep learning algorithms to learn how a person moves and speaks, then generating fake videos of them that look remarkably realistic. While some people use them for comedic effect, such as the video showing Jon Snow apologizing for the abysmal ending of Game of Thrones, not all of them are so benign.
Experts are becoming increasingly worried about the impact deepfake technology could have on democracy, as it could let people create doctored videos of politicians saying and doing things they never did and never would. One instance of this in action was a doctored video of Nancy Pelosi that made her appear to stumble over her words. The video was shared widely across social media, including by former president Donald Trump.
The research from the University at Buffalo was spearheaded by Siwei Lyu, a computer scientist looking to create a deepfake detection tool by leveraging minor differences in eye reflections. When you look at something in real life, it is reflected in both of your eyes with the same shape and color.
Lyu explains how the technology works, saying, “The cornea is almost like a perfect semisphere and is very reflecting. So, anything that is coming to the eye with a light emitting from those sources will have an image on the cornea. The two eyes should have very similar reflective patterns because they’re seeing the same thing. It’s something that we don’t typically notice when we look at a face.”
This doesn’t hold for deepfakes, likely because of the different images used to generate them. The team built a tool that exploits this weakness: it maps the face, locates the eyes, isolates the eyeballs, and then analyzes the light reflected in each. By comparing differences between the two reflections, the tool can spot fakes. So far it is 94% accurate, but the team acknowledges some limitations of the approach. One key limitation is that these light differences can be fixed with further editing; another is that the tool requires a clear picture of the eyes.
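To make the idea concrete, here is a minimal sketch of the core comparison step, not the researchers' actual implementation. It assumes you already have grayscale crops of the two eyes and uses a simple intersection-over-union of bright "highlight" pixels as a stand-in similarity score; the threshold value and the scoring choice are illustrative assumptions.

```python
import numpy as np

def highlight_mask(eye_crop, thresh=200):
    """Binarize the bright specular highlights in a grayscale eye crop.
    The threshold of 200 is an illustrative assumption."""
    return eye_crop >= thresh

def reflection_similarity(left_eye, right_eye, thresh=200):
    """Hypothetical similarity score: intersection-over-union of the
    two eyes' highlight masks. Real eyes viewing the same scene should
    score high; mismatched deepfake reflections should score low."""
    a = highlight_mask(left_eye, thresh)
    b = highlight_mask(right_eye, thresh)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # no highlights detected in either eye
    inter = np.logical_and(a, b).sum()
    return inter / union

# Demo with tiny synthetic 8x8 eye crops.
real_left = np.zeros((8, 8), dtype=np.uint8)
real_left[3:5, 3:5] = 255           # small bright highlight
real_right = real_left.copy()       # same scene, same reflection
fake_right = np.zeros((8, 8), dtype=np.uint8)
fake_right[1, 6] = 255              # displaced highlight

print(reflection_similarity(real_left, real_right))  # 1.0 (consistent)
print(reflection_similarity(real_left, fake_right))  # 0.0 (inconsistent)
```

In practice, a real detector would first need robust face and eye localization and a well-tuned decision threshold; this sketch only illustrates why mismatched corneal reflections are a usable signal.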
If you’d like to learn more, you can find a paper outlining the research here.