On January 14, 2014, The Associated Press terminated its relationship with Pulitzer-winning photographer Narciso Contreras. The reason? He had deliberately erased a video camera visible in the bottom left of an image from the Syrian war that he submitted to AP for distribution.

When one looks at the image, the deletion doesn’t affect the information we get from it in any way. Same man, same action, same war. In fact, the deletion helps us avoid distraction and concentrate on the essential information. Yet it is unacceptable in terms of journalistic ethics. It falsifies reality. Furthermore, it casts lasting doubt on the photographer’s entire body of work, raising the question: if he did this to one image, what else has he altered in his others? Worse, it casts doubt on the Associated Press as a trusted news organization: if one of its photographers erased content from an image, how many others did the same?

The chain of trust is broken.

What was missing here was the viewers’ ability to make their own decisions. If the image without the camera had been distributed along with the information that a deliberate edit had been made to make the image more readable, there would have been no issue. Because, along with the act of editing, there would have been two crucial pieces of information:

  1. Who did the edit?
  2. Why?

If we are informed on both of these, we can make an educated judgment about the trustworthiness of the image. We would know that a Pulitzer-winning photographer with no history of falsifying information decided to remove a distracting object from the frame of one of his images to make it more readable. We would clearly see that there was no intent to deceive. This is what matters in news photography and all documentary photography: the intent. Because intent defines trustworthiness. And trust is what we build civilization on.

Last week, three of the most powerful companies in the world, Meta, OpenAI, and Google, separately announced that they would start implementing and supporting a metadata framework called Content Credentials, established by the C2PA working group and spearheaded by the Content Authenticity Initiative. This is a defining moment in the history of photography, reminiscent of pivotal shifts like the advent of autofocus or the transition from analog to digital. It is as important as the invention of photography itself.

What is it, and why is it so important?

It is a standard that embeds metadata in images detailing their origin and edit history. Such a mechanism enhances transparency, enabling viewers to verify the authenticity and integrity of visual content.
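To make the mechanism concrete, here is a minimal Python sketch of the core idea: bind a record of authorship and edits to the exact image content, so that any undisclosed change becomes detectable. This is illustrative only; real Content Credentials are cryptographically signed, standardized binary structures embedded in the file, and every name below is hypothetical.

```python
import hashlib

def make_manifest(image_bytes: bytes, creator: str, actions: list) -> dict:
    """Build a simplified, illustrative provenance manifest.
    It records who made the image, what edits were declared,
    and a hash that binds the record to the exact pixels."""
    return {
        "claim_generator": "example-app/1.0",  # hypothetical tool name
        "creator": creator,
        "actions": actions,  # declared edit history, e.g. ["created", "cropped"]
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """A viewer recomputes the hash: any pixel edit that was not
    followed by an updated, re-signed manifest breaks the match."""
    return manifest["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

# Stand-in for raw image bytes.
original = b"\x89PNG...raw image bytes..."
manifest = make_manifest(original, "Jane Doe", ["created"])

print(verify_manifest(original, manifest))            # True: untouched image
print(verify_manifest(original + b"edit", manifest))  # False: undisclosed edit
```

In the real standard, the manifest is also signed by the camera or editing software, so a viewer can check not just that the image is unmodified but who vouched for each step.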

It is important because, in a world overwhelmed with images composed or altered by AI, knowing who created an image and what alterations it has undergone gives the viewer a clear indication of intent and, thus, trust. Remember the age-old wisdom: ‘Seeing is Believing.’ Now, more than ever, it underscores the indispensable role of authentic photography in our lives. Without unambiguous insight into an image’s origins and intentions, we stand on the precipice of losing our foundational trust in visual evidence, rendering every image, even the genuine ones, suspect. And with it, our ability to make sound decisions about our world. This could ultimately lead to the extinction of photography as a medium of documentary evidence.

The creation of an image by AI is not inherently problematic, provided that it is transparently labeled as such. Similarly, photographers have the freedom to edit their images to any extent, given that these alterations are clearly disclosed. With transparency in both scenarios, viewers are empowered to assess the credibility and trustworthiness of what they observe.

Major camera manufacturers have already adopted the standard. Leica already offers it in one of its cameras, while Canon, Sony, and Nikon have publicly announced upcoming models that will be fully compatible. Chip manufacturer Qualcomm, which powers most non-Apple smartphones in the world, has also announced compatibility. The Associated Press and Reuters have presented examples of their intended implementations, indicating they will fully adopt these practices themselves. Only media publications are slow to react for the time being, but with good reason: there are still too few images containing the new metadata. But no worries, they will come. And pretty soon, any website that does not display content credentials on its photographs will look suspicious and untrustworthy.

Last week’s news of the triple adoption by the world’s top tech companies has cemented the initiative, turning it from a what-if into an official standard and transforming the photography landscape into a space where everyone will be able to distinguish real images from AI-generated ones and everything in between. And while deception will still exist, it will just be that much harder to fool everyone, not with pictures, at least.

main photo by Brigitta Schneiter on Unsplash

Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the “100 most influential individuals in American photography.”
