The real strength of deepfakes, or altered images, is not that they falsify the truth (we have been able to do that since the birth of photography) but rather that, thanks to social media, they are completely dissociated from their original creators and thus from their intent.
When the Soviet government released a documentary, we knew to be skeptical of its content, as that government was notorious for propaganda and biased information.
Today, when we see a photo or video, we do not know its origin and thus cannot assess the intention of its creator. Above all: was it made to deceive, or to report thoughtfully?
Thus, one of the most potent tools against deepfakes is an indelible connection between a piece of content and its original creator, so that we, as viewers, can assess that creator's intentions.
Indelibly linking creators to their creations would bring accountability and, in the process, dramatically reduce the damage done by deepfakes. Intent could quickly be derived from knowledge of the source. Responsibility would always be assigned.
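The idea of an indelible creator-to-content link can be sketched in code. Real provenance systems (such as C2PA "Content Credentials") use public-key digital signatures; the sketch below is a simplified stand-in using only Python's standard library, with an HMAC in place of a true signature. The key, creator ID, and image bytes are all hypothetical placeholders.

```python
import hashlib
import hmac

# Hypothetical creator key. A real provenance system would use an
# asymmetric private key held only by the creator; HMAC is a stdlib stand-in.
CREATOR_KEY = b"creator-secret-key"

def sign_content(image_bytes: bytes, creator_id: str) -> str:
    """Return a tag binding creator_id to these exact content bytes."""
    payload = creator_id.encode() + b"\x00" + image_bytes
    return hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()

def verify_content(image_bytes: bytes, creator_id: str, tag: str) -> bool:
    """True only if both the bytes and the claimed creator are unchanged."""
    expected = sign_content(image_bytes, creator_id)
    return hmac.compare_digest(expected, tag)

photo = b"\x89PNG...raw image data..."           # placeholder content
tag = sign_content(photo, "jane.doe@example.com")

print(verify_content(photo, "jane.doe@example.com", tag))          # True: authentic
print(verify_content(photo + b"x", "jane.doe@example.com", tag))   # False: pixels altered
print(verify_content(photo, "someone.else", tag))                  # False: wrong creator
```

The point of the sketch is the failure modes: any alteration to the content, or any attempt to claim someone else's work, breaks the link, which is exactly the accountability the article argues for.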
The current response, blaming technology and only technology for evil actions, leads us to believe that the remedy also lies in technology: create an anti-Photoshop to detect doctored images, build deepfake-recognition tools to expose fake videos. Granted, technology makes it easier to deceive, but at the core of the issue is the intent to deceive, and that is an entirely human characteristic. Moral responsibility is at stake here. No technology is evil until it is used with evil intent.
What technology can do is force those who have deceptive intent to be clearly identified, making their task that much more difficult.
If every published video or image had its creator's name indelibly embedded, we could make an educated decision about the truthfulness of what we are seeing. An image from a photographer with an exemplary history of never falsifying images would be more credible than one from a staff photographer at Breitbart News. Videos from the New York Times would be more believable than ones from Fox News. Anything coming from a political candidate's team would have to be taken with a grain of salt. And so on.
While even a reliable, confirmed source could turn sour, it would pay a consequential price: every one of its images or videos, past or future, would immediately lose all credibility. It takes a long time to build confidence and trust, and just seconds to lose it all.
Social media platforms should decline to post any image or video whose creator is not clearly identified and verified. Publications should do the same (most already do). The tools largely exist already: both photos and videos carry metadata containing the creator's information along with GPS coordinates and the time of creation, critical information for combating deepfakes. Unfortunately, this metadata is not widely used, and it is very often automatically erased by publications and social media platforms alike.
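The metadata in question typically lives in a JPEG's Exif APP1 segment, and "erasing" it is as simple as dropping that segment before the image data. The sketch below, using only the standard library, locates an Exif segment inside JPEG bytes built for the demonstration; the `Artist=Jane Doe` string is illustrative, since real Exif stores TIFF-encoded tags rather than plain text.

```python
import struct

def find_app1_exif(jpeg_bytes: bytes):
    """Walk a JPEG's segment list and return the raw Exif APP1 payload, if any."""
    if jpeg_bytes[:2] != b"\xff\xd8":              # must start with the SOI marker
        return None
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:                  # every segment starts with 0xFF
            return None
        marker = jpeg_bytes[i + 1]
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xE1 and segment.startswith(b"Exif\x00\x00"):
            return segment[6:]                     # Exif payload found
        if marker == 0xDA:                         # start of scan: no more metadata
            return None
        i += 2 + length
    return None

# Build a toy JPEG with an APP1 Exif segment (illustrative payload, not real TIFF tags).
payload = b"Exif\x00\x00Artist=Jane Doe"
app1 = b"\xff\xe1" + struct.pack(">H", len(payload) + 2) + payload
scan = b"\xff\xda" + struct.pack(">H", 4) + b"\x00\x00"

with_meta = b"\xff\xd8" + app1 + scan
stripped  = b"\xff\xd8" + scan     # what many platforms effectively publish

print(find_app1_exif(with_meta))   # b'Artist=Jane Doe'
print(find_app1_exif(stripped))    # None
```

The second call is the article's complaint in miniature: once a platform re-encodes the file without the APP1 segment, the creator, location, and timestamp are simply gone.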
Certified trust will soon become the most important value for visual content: trust that images and videos have not been altered in any way that could deceive the viewer. No fake images or montages, no biased alterations. By imposing accountability, the line between deception and truthfulness would be clearly drawn, and moral responsibility reestablished. That is the only way to combat deepfakes.
Author: Paul Melcher
Paul Melcher is the founder of Kaptur and Managing Director of Melcher System, a consultancy for visual technology firms. He is an entrepreneur, advisor, and consultant with a rich background in visual tech, content licensing, business strategy, and technology, with more than 20 years of experience developing world-renowned photo-based companies, including two successful exits.