Some speculate that fake news overall could cost the economy $39 billion a year. Quite a market to grab for a savvy tech startup, even at 1%! But while fake news, and deepfakes in particular, have been accused of wreaking havoc on minds and the economy, there are surprisingly few companies offering tools to combat them. The reason?

Cost

The most important, probably, is the difficulty of the task. Deepfakes are notoriously hard to detect and are evolving very quickly. Since they are built with the same AI techniques used to identify them, any innovation in detection is immediately applied to improve generation. A recent Facebook challenge with over a thousand entries placed the winner at a pathetic 65% detection success rate. No one would pay for a solution with such low results. And because an effective solution would need to be continuously improved, it would be R&D-intensive, expensive, and commercially challenging.

Value

The cost of deepfakes and manipulated media is calculated in lost credibility, not dollars, for now. Politicians and celebrities are the number one targets, and their notoriety quickly surfaces any odd behavior. Deepfakes are quickly debunked and, at worst, leave a few reputation scratches. Most celebrities and politicians might even welcome the attention, profiting from the free exposure boost, even if negative (cue the “there is no such thing as bad publicity” adage).

A recent notorious deepfake is one of President Richard Nixon delivering a speech he never made. © In The Event of Moon Disaster/YouTube

Market

The primary victims today are media companies and platforms; both depend on credibility to maintain their audiences. This is why Facebook and Twitter, along with Microsoft and top media publishers, are the only ones developing internal or industry-specific anti-manipulated-media solutions. Powered by a combination of human and machine-based content monitoring and fact-checkers, these solutions are more public services than commercial enterprises.

Sector

For a brand, the stakes can be higher: beyond the reputation hit, damages can range from devalued stock to boycotts and the permanent loss of customers. And yet, brands are currently massively underserved. They have little to no visibility into fake content targeting them or its impact. When they do, it is very often too late.

Brands can be hijacked and promote a false narrative without their knowledge.

Solutions

There are just a handful of companies that have decided to tackle the challenging issue of fake or manipulated visual media. Two main approaches dominate: certification at the moment of capture, and search-engine/content monitoring.

Point of capture certification

One approach is to create an indelible certification at the time of creation. By capturing content in a controlled setting (like a dedicated app), along with every available piece of metadata, these tools generate a ground-truth reference file. It not only certifies that the image or video was truly taken when and where it claims, but also serves as a certified reference for later comparison.
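The mechanics can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor’s actual implementation; the function names and record fields are invented for the example:

```python
import hashlib
import json
import time

def certify_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Create a tamper-evident record at the moment of capture."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,          # e.g. GPS, device, sensor data
        "captured_at": time.time(),
    }
    # Hash the whole record so neither pixels nor metadata can change unnoticed
    serialized = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(serialized).hexdigest()
    return record

def verify(image_bytes: bytes, record: dict) -> bool:
    """Later, re-hash the received file and compare to the reference."""
    return hashlib.sha256(image_bytes).hexdigest() == record["content_hash"]
```

Any edit to the file, even a single byte, changes the hash, so a recipient can prove the copy they hold is the one created in the controlled app.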

Verification at the point of capture. iPhone screenshot of Serelay interface

Serelay claims to capture over 500 data points along with the image or video and to store these for later verification if needed. The company’s partners are guaranteed that the images or videos they receive are truthful. Truepic has adopted a similar approach: a dedicated photo app runs a series of additional tests before transmission (like a Google reverse image search) to guarantee authenticity and truthfulness. For added security, Truepic records a cryptographic signature on a blockchain. While Serelay caters to news media companies, Truepic has a strong foothold in the insurance industry and its claims process.

Like Serelay and Truepic, Attestiv provides a point of capture certification app with a workflow suite aimed at the insurance industry. They also use blockchain to store unalterable information about the file.

Truepic leaves nothing out when checking the veracity of a file

Amber Video provides a similar capture process to the companies above, but for video only. While filming, the platform generates hashes that are indelibly recorded on a public blockchain. For verification, the video footage is run through their algorithm; if the hashes differ, there is proof of alteration. The company also claims to have a tool to identify deepfakes regardless of the source, but offers little to no information on its effectiveness.
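The hash-and-verify step can be illustrated with a toy chain of segment hashes. This is only a sketch of the general technique; Amber’s actual hashing scheme and blockchain anchoring are not public, and the function names here are invented:

```python
import hashlib

def fingerprint_segments(segments: list[bytes]) -> list[str]:
    """Hash each video segment while filming. Each hash folds in the previous
    one, so segments can't be reordered or dropped without detection."""
    hashes, prev = [], b""
    for seg in segments:
        digest = hashlib.sha256(prev + seg).digest()
        hashes.append(digest.hex())
        prev = digest
    return hashes  # in practice, anchored on a public blockchain

def find_alteration(segments: list[bytes], recorded: list[str]):
    """Recompute the chain on received footage; return the index of the first
    segment whose hash diverges, or None if the footage is intact."""
    for i, h in enumerate(fingerprint_segments(segments)):
        if h != recorded[i]:
            return i
    return None
```

Because the chain diverges at the first modified segment, a verifier can point to roughly where in the footage the alteration occurred, not just that one exists.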

Content monitoring

Sensity offers a truly capture-independent solution. The company continuously monitors the web in search of deepfakes. It has conducted extensive threat-intelligence research to better understand the bad actors, how they sell services or monetize fake videos, and how deepfakes spread online. The result: it is the first company to provide customers with tools to detect and intercept these threats.

Similar, but more obscure for now, Falso.tech offers an API and an SDK, both in private beta. The company seems to focus exclusively on detecting whether a face actually belongs in a frame and drawing authenticity conclusions from there. How it plans to find suspicious videos is unclear, as is whether it plans to expand to other types of content.

Content protection

While not exclusively a deepfake monitoring and detection tool, Imatag‘s invisible watermark can do both. Inserted at the pixel level of video frames or photographs, the invisible watermark can reveal whether an image has been altered, and which part. It is more flexible than a point-of-capture approach, since any existing content can be watermarked: robust alteration detection can be added at any point in a visual content’s existence. Used by news photo agencies and Fortune 500 brands, it continuously scouts the internet for both suspicious and legitimate copies.
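To make the concept concrete, here is a toy least-significant-bit watermark, the textbook form of pixel-level watermarking. It is emphatically not Imatag’s proprietary technique (which must survive compression, resizing, and cropping), just an illustration of how an embedded pattern can flag which pixels were altered:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide a binary mark in the least-significant bit of each 8-bit pixel.
    Changing the LSB shifts intensity by at most 1, invisible to the eye."""
    return (pixels & np.uint8(0xFE)) | mark

def locate_tampering(pixels: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Return a boolean map of pixels whose embedded bit no longer matches,
    revealing not just that an image was altered but which regions were."""
    return (pixels & np.uint8(1)) != mark
```

A real invisible watermark spreads the signal redundantly across frequency bands rather than single bits, but the principle is the same: a known hidden pattern turns any later edit into a detectable, localizable inconsistency.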

Imatag’s invisible watermark is a powerful detection tool against manipulated media

Synthetic media for good

Not all deepfakes are evil. In fact, the technology has many useful applications. First on the market, Synthesia offers a customizable video builder whose presenters can speak up to 34 different languages while adapting facial movements accordingly. One can use one of the predefined presenters or upload a video of oneself.

This person can sell whatever you want via Synthesia.

The result is a deepfake that can teach, sell, recommend, or explain anything, with a text entry as input and a perfectly synchronized video as output. No need for expensive video equipment, studio rentals, expert staff, or multiple takes. And in case you were wondering, the company monitors the usage of its solution and blocks any malicious attempts.

That is it for now

As with any new field, it will take time for valid solutions to emerge. With deepfakes, the issue is compounded by the technology’s ability to evolve outside traditional, controlled academic settings, which makes building solutions extremely challenging and unstable. Companies also have yet to determine how to approach this new challenge and how much to invest. In other words, like the technology, the marketplace is far from settled. When it is, which will happen soon enough, the rewards could be impressive.

Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Gamma Press, Stipple, and more. Melcher received a Digital Media Licensing Association Award and has been named among the “100 most influential individuals in American photography”.
