
Protecting your content in the age of AI

A week ago, leading tech companies working on generative AI pledged to use their technical abilities to clearly mark content produced by their AI. While this move is a positive step, it only lightly touches on the broader problems of reliability and trust in the era of AI.

Generative AI poses a major threat to visual content. The issue isn’t simply that fake images could be mistaken for real ones. It’s actually more concerning that real images could lose their credibility. The former issue can lead to deception, while the latter can foster distrust. Distrust is particularly problematic because, unlike deception, which impacts a single event and can be rectified, distrust undermines the very foundation of a relationship and is essentially irreversible. As a result, none of your images, real or artificial, can be trusted.

In a world where everyone, from teenagers to Fortune 50 companies, communicates through images, this problem is crucial. It threatens not only our trust in the news but also our relationships with all kinds of brands.

Today, anyone can manipulate a corporate or marketing image into a tool for spreading misinformation with just a few AI-assisted clicks, leaving companies defenseless. Public companies are at the most significant risk of experiencing profound damages, as their stock market value is heavily influenced by shareholders’ perceptions. Tampering with existing product images or images of the CEO will soon become a significant means of negatively impacting their financial performance. We’ve proposed that Digital Asset Management (DAM) vendors adjust their tools to help companies build databases of verifiable truths, but there are other potential solutions for protecting visual content.

The stock market doesn’t react well when a brand is involved in unsavory activities. Photo by Oren Elbaz on Unsplash.

Let’s look at some pros and cons of various methods to protect visual content from malicious manipulation:

1. C2PA/CAI (Coalition for Content Provenance and Authenticity / Content Authenticity Initiative)


Pros:

   – Creates a standard framework for attribution, making it easier to identify original creators.

   – Works with existing metadata standards and can be combined with other technologies, like blockchain or watermarking, to further enhance content authenticity.

   – The initiative was created by a diverse and influential consortium, including tech giants like Adobe, Microsoft, Intel, and Sony, media companies like the BBC and The New York Times, and camera manufacturers like Nikon and Canon, indicating strong industry support.

   – It provides a way for users to understand the full history of a digital asset and verifies its origin via an approved certificate.

   – Can handle all types of digital files (text, sound, video), increasing its applicability.


Cons:

   – A young solution: successful adoption requires worldwide consensus among the various players in the tech and media industries.

   – The process for creating approved certificates is not yet well-defined.

   – It hasn’t been integrated into any image editing and management tools beyond some Adobe products like Photoshop.

   – It’s still in early stages of development, thus requiring significant software development resources.
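C2PA itself defines cryptographically signed manifests embedded in the asset and verified against approved certificates, and its SDKs are still maturing. Purely as an illustration of the underlying idea — a provenance claim bound to the exact bytes of an asset and signed — here is a toy Python sketch. It substitutes a shared-secret HMAC for the certificate-based signatures the real standard uses, and all names in it are hypothetical:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real C2PA signing certificate

def create_manifest(asset_bytes: bytes, creator: str) -> dict:
    """Bind a provenance claim to the exact bytes of an asset."""
    claim = {
        "creator": creator,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the asset is unmodified."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return manifest["claim"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

image = b"\x89PNG...original pixels"
manifest = create_manifest(image, creator="Kaptur")
print(verify_manifest(image, manifest))                # True
print(verify_manifest(image + b"tampered", manifest))  # False
```

A single flipped byte changes the hash, so any edit after signing is detectable — that binding between claim and bytes is the core of what C2PA standardizes.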

2. Crypto/Blockchain


Pros:

   – Ensures content integrity by storing and verifying information about the digital asset in a decentralized manner.

   – Content creators can maintain ownership rights of their work, and the chain of custody can be easily tracked.

   – Can work in conjunction with initiatives like C2PA/CAI to further enhance content integrity.


Cons:

   – High energy usage and carbon footprint associated with blockchain transactions.

   – Involves technical complexity and can be expensive to implement.

   – No industry standard in place currently, which can lead to interoperability issues.
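The integrity property blockchains offer comes from hash chaining: each entry commits to the one before it, so rewriting history invalidates everything that follows. A minimal single-machine sketch of that mechanism (a toy, not a real distributed ledger — decentralization and consensus are exactly what it omits):

```python
import hashlib
import json

class ProvenanceLedger:
    """Toy append-only ledger: each entry commits to the previous one,
    so altering any past record breaks every later hash."""

    def __init__(self):
        self.entries = []

    def register(self, asset_bytes: bytes, owner: str) -> str:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
            "owner": owner,
            "prev_hash": prev_hash,
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record["entry_hash"]

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

ledger = ProvenanceLedger()
ledger.register(b"press-photo-v1", owner="Newsroom")
ledger.register(b"press-photo-v2", owner="Newsroom")
print(ledger.verify_chain())            # True
ledger.entries[0]["owner"] = "Imposter" # tamper with history
print(ledger.verify_chain())            # False
```

In a real deployment, the chain is replicated across many independent nodes, which is what makes the tamper-evidence meaningful — and also where the cost and complexity listed above come from.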

3. Watermarking (invisible)


Pros:

   – Easy to implement and helps directly associate content with its origin and creator.

   – Can be used as proof of ownership.

   – Can help identify any modifications by comparing with the original file.

   – Can work in association with C2PA, Blockchain, and ISCC to ensure authenticity.


Cons:

   – Requires linkage to an original public reference database for verification.

   – Relies on proprietary certification tied to specific vendors, limiting versatility.
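Commercial invisible watermarks use robust, proprietary schemes that survive compression and cropping. The simplest classroom version of the idea, though, is least-significant-bit (LSB) embedding: hide a payload in the lowest bit of each pixel, where the change is imperceptible. A stdlib-only Python sketch (fragile by design — any re-encoding destroys it, which is why vendors use stronger methods):

```python
def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide `mark` in the least-significant bit of each pixel value.
    Visually imperceptible: each pixel changes by at most 1 out of 255."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes from the pixels' least-significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )

image = bytearray(range(256))          # fake 16x16 grayscale image
marked = embed_watermark(image, b"KPTR")
print(extract_watermark(marked, 4))    # b'KPTR'
print(max(abs(a - b) for a, b in zip(image, marked)))  # 1
```

Comparing a suspect file's extracted mark against the registered original is what lets a rights holder demonstrate ownership or detect modification — provided, per the cons above, a reference database exists to compare against.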

Which one is the real (non-synthetic) image?

4. ISCC (International Standard Content Code)


Pros:

   – Provides a unique identifier for digital content, aiding in content tracking and rights management.

   – Facilitates detection of duplicate or very similar content.

   – The ISCC is on its way to becoming an ISO standard, which could increase its adoption and influence.

   – Can handle all types of digital files (text, sound, video), increasing its applicability. 


Cons:

   – Might not prevent the initial misuse of content; it mainly helps in the aftermath.

   – Widespread adoption requires consensus among stakeholders.

   – No existing practical implementations besides the Liccium app, indicating it may take time to realize its full potential.
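What distinguishes ISCC from a plain checksum is that parts of the code are similarity-preserving: near-duplicate content yields near-identical codes, which is what enables duplicate detection. This is not the ISCC algorithm itself, but a toy SimHash-style Python sketch of that one idea, using made-up example captions:

```python
import hashlib

def similarity_code(text: str, bits: int = 64) -> int:
    """SimHash-style code: similar inputs produce codes that differ
    in only a few bit positions, unlike a cryptographic hash."""
    vector = [0] * bits
    for word in text.lower().split():
        h = int.from_bytes(hashlib.sha256(word.encode()).digest()[:8], "big")
        for i in range(bits):
            vector[i] += 1 if (h >> i) & 1 else -1
    code = 0
    for i, v in enumerate(vector):
        if v > 0:
            code |= 1 << i
    return code

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two codes."""
    return bin(a ^ b).count("1")

original = "the ceo announced record quarterly results today"
near_dup = "the ceo announced record quarterly results yesterday"
unrelated = "completely different caption about something else"

print(hamming(similarity_code(original), similarity_code(near_dup)))
print(hamming(similarity_code(original), similarity_code(unrelated)))
```

A small Hamming distance flags likely duplicates or light edits; a cryptographic hash, by contrast, changes completely at the first altered word, which is why similarity-preserving codes are the right tool for tracking content reuse.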

5. Image Matching


Pros:

   – Can identify identical or near-identical images used in different contexts.

   – Can work in association with C2PA, Blockchain, Watermarking and ISCC to ensure authenticity.

   – Useful for tracking the spread and usage of an image.


Cons:

   – Struggles to identify manipulated or heavily altered versions of an image.

   – Can yield false positives, identifying unrelated images as matches.
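Most image-matching systems rest on perceptual hashing: reduce the image to a tiny grayscale grid and hash the brightness gradients, which survive resizing and re-encoding. A minimal difference-hash (dHash) sketch in pure Python, on hand-made toy "thumbnails" rather than real image decoding:

```python
def dhash(pixels: list[list[int]]) -> int:
    """Difference hash: one bit per adjacent-pixel comparison on a
    downscaled grayscale grid. Re-encoding rarely flips these
    brightness gradients, so near-duplicates hash alike."""
    h = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            h = (h << 1) | (1 if left < right else 0)
    return h

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# 4x5 toy grayscale thumbnails (real use: first resize the image, e.g. to 8x9)
photo = [[10, 20, 30, 25, 5],
         [40, 42, 41, 50, 60],
         [90, 80, 70, 60, 50],
         [15, 25, 35, 45, 55]]
# same photo after lossy re-encoding (small brightness noise)
reencoded = [[11, 21, 29, 26, 4],
             [41, 41, 42, 49, 61],
             [89, 81, 69, 61, 49],
             [14, 26, 34, 46, 54]]

print(hamming(dhash(photo), dhash(reencoded)))  # 2 of 16 bits: near-duplicate
```

This also makes the cons concrete: a heavy manipulation rearranges the gradients and the match is lost, while two unrelated images can coincidentally land within the match threshold, producing false positives.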

6. AI


Pros:

   – Can be trained to detect subtle manipulations in images that may be invisible to the human eye.

   – Can automate the process of identifying and flagging manipulated content.


Cons:

   – AI systems can be fooled or evaded by sophisticated manipulations.

   – Requires significant computational resources and technical expertise to implement effectively.

   – As of now, there are no viable AI solutions in the marketplace that effectively address the issue of image integrity.

None of these solutions provide a perfect out-of-the-box fix, and their appropriateness depends on specific requirements. Some are explicitly designed to address authenticity and trust issues, such as C2PA, but lack adoption, while others, like watermarking, are long-existing technologies but lack task-specific implementation.

However, the issue of image manipulation and consequent erosion of trust is only going to intensify, making the implementation of a solution more urgent. Companies must act now to protect their content integrity and build response mechanisms to mitigate the escalating impact of image manipulation and the erosion of trust in visual content.


Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the “100 most influential individuals in American photography”.
