They like the tool, but they don’t like the perceived demonization: After Instagram instituted a label to clearly identify AI-generated images, many photographers were up in arms, angry that it also applied to images retouched using AI. “It is not the same,” they say. “Retouching is not generating, and our images should not be classified as such.”
Here is the underlying issue. A photograph labeled “AI-generated” is untrustworthy; it’s not real. It’s a fake. Photographers are angry that their images carry this label because it implies their work is fake and untrustworthy. The irony is that the photographers complaining are all editing their images, altering what they captured, yet they refuse the label “not real.” Furthermore, they use photography for commercial reasons: either to sell products or services or to promote themselves. Their objective is not to showcase reality as it is, but to sell it. To that end, they set up, embellish, manipulate, retouch, and clean up every single frame they approve for publishing. All assisted by AI, by the way. Here again, they are not documenting reality; they are manipulating it.
Ironically, the very photographers who manipulate reality through their craft are resistant to a label that acknowledges this manipulation.
And this, really, is what the conversation between AI and photography is about: reality.
We had this same debate nearly 200 years ago, when photography was invented. For centuries, painting was the most accurate depiction of reality (think portraits and landscapes), until it was not; photography was. Now a new disruptor has entered the field, and the debate has reopened. GenAI is not reality; photography is. Or is it?
For centuries, the camera obscura, in both its analog and digital forms, has been seen as the ultimate tool for capturing reality (even though, for its first 100 years, photography was black and white). And in theory, that isn’t entirely false. By roughly replicating how our eyes function, capturing rays of light reflected off objects, it comes very close to how we perceive the world. But, like our vision, it is far from exact.
Reality is not limited to what we see.
Reality is a multi-sensory experience. It has sound, structure, dimension, movement, and smells. A photograph, on the other hand, only captures a portion of this: it’s static, cropped, flat, and two-dimensional. It’s taken from one specific angle, with a particular focal length, and a limited dynamic range. The film’s sensitivity, the photographer’s biases, and even the medium it’s displayed on further influence the final image. What we end up with is a narrow, subjective slice of reality.
Instead, we are looking at a testimony—someone’s vision of reality, as they interpret it using a camera obscura. And exactly like in a court of law, we give it value depending on the reputation of the witness.
Now, AI enters the scene. It generates images through deep learning, mimicking how our brains recognize patterns. These AI systems are trained on vast datasets of real images, learning the structures, textures, and styles that define various objects and scenes. From these learned patterns, AI can create entirely new images that convincingly resemble characteristics of the original dataset. In the end, we’re experiencing a variation of reality as interpreted by an algorithm mimicking the human mind.
The camera mimics our eyes, AI mimics our brains. The camera mimics how we receive the world, AI mimics how we perceive and interpret it.
In effect, a photograph generated by AI is no less real than one taken with a light-based camera.
This ability to generate convincing images raises questions about their authenticity and relationship to reality. It is only because an AI-generated image represents events that never happened that we consider it fake. But it’s not the image that is fake; it’s only what it depicts (since it never happened).
But what if it does represent something that really happened and was generated using the testimony of those present at the time? What if the generated image is the sum of thousands of human testimonies from different angles and different moments via different perspectives and personalities, each with its own biases? Would that photograph be more real than one taken by a lonely photographer, from one point of view, at one unique moment?
Considering all the alterations done to reality – whether via a light-based camera or an AI – does it matter what tool we use, as long as what we depict really happened?
So, in theory, a label should not be about what method was used to create the image – light-based, AI, or any variation thereof – but about whether what is depicted ever actually existed. Or, failing the ability to verify that, we should adopt a less binary classification of images, one that goes beyond “real” vs. “fake.”
We are living in a time when traditional photography – and photographers along with it – needs to be redefined. Its role, its definition, its purpose all need to be revisited in light of this disruptive newcomer. If traditional photography is to be the instrument of reality, then it can no longer tolerate editing – especially when powered by AI – as an accepted, natural part of its process. It cannot live with a vaguely defined threshold of acceptable manipulation. It is raw capture, or nothing. Or, like its AI cousin, it must carry a warning that, if edited, it is an alteration of reality.
Author: Paul Melcher
Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Gamma Press, Stipple, and more. Melcher received a Digital Media Licensing Association Award and has been named among the “100 most influential individuals in American photography.”