In the realm of deepfakes, where the interplay of visual realism and information authenticity is critical, the concept of the uncanny valley takes on a new dimension. Just as robotics and CGI have grappled with the challenge of creating human-like appearances that are comfortable for observers, deepfake technology faces a similar hurdle – not just in visual deception but also in the believability of the information it presents.

The uncanny valley of information suggests that there is a threshold of credibility that must be carefully navigated. An image or video may look startlingly real, but if the accompanying information or narrative is too outlandish or implausible, it fails to convince. This is akin to encountering a robot that looks almost human but behaves oddly, triggering a sense of unease. For a deepfake to be truly convincing, it must not only visually deceive but also present a narrative close enough to reality to be believable. The subtlety of divergence from the truth is key: too far, and it falls into the uncanny valley; too close, and it risks being unremarkable.

Masahiro Mori’s classic illustration of the uncanny valley effect can be applied to our acceptance of new information.

Interestingly, this phenomenon implies that the most effective deepfakes might be those that deal with seemingly trivial or mundane subjects. The less extraordinary the claim, the more likely it is to pass through the uncanny valley of information without raising suspicion. This subtlety in deception is a double-edged sword; while it makes detection more challenging, it also confines the scope of believable deepfakes to the realms of the ordinary or the believable extraordinary.

So while media and pundits focus on how realistic GenAI images and sound are becoming, they overlook the main hurdle of disinformation: making it believable. What is dangerous is not the advances made in visual technologies but rather the experience gained from repeated trial and error. As we have seen with the recent Taiwanese election, probably one of the prime targets of disinformation organizations, deepfakes have yet to make a real impact. But with more than 50 elections worldwide still to come in 2024, there is much room for improvement.

Almost believable because the information is trivial: fake photos generated by artificial intelligence software appear to show Pope Francis walking outside the Vatican in a designer coat.

As we continue to advance in our ability to create realistic deepfakes, understanding the uncanny valley of information becomes crucial: It’s not just about creating a visually perfect forgery; it’s about crafting a narrative that is just believable enough to be true, yet slightly off-center from reality. This delicate balance is what makes a deepfake either seamlessly convincing or uncomfortably implausible.

Consequently, the most effective approach lies not in tools that detect technical forgeries but in educating the audience. A well-educated population is less likely to be fooled, regardless of the tools of deception used. In the end, as with any tool of deception, the ethical implications and potential for misuse remain a significant concern, reminding us of the importance of vigilance in the age of advanced digital manipulation.

Main image: Photo by Hans Luiggi on Unsplash

Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the "100 most influential individuals in American photography."
