Let’s assume you prefer eating healthy (OK, I realize we’re less than 48 hours past your Super Bowl snacks). In most countries, help comes to the rescue: you can check the nutrition label or see whether the food carries a Certified Organic label. If a vendor sells a product without a nutrition label, or slaps a fake “organic” sticker on something that isn’t, in fact, organic, a government agency could very well catch them, and they could face serious repercussions.
So, while this system is not 100% foolproof, I can have a fair degree of confidence about whether the food I’m about to eat is more or less healthy, or seemingly yummy but on the junk side of the spectrum.
I had to think about these food labeling regulations and enforcement policies when absorbing the rapid-fire recent announcements of major tech companies – no less than Google, Meta, and OpenAI – pledging to label AI-generated images that their platforms help create or share as “AI-generated”.
Why label AI-generated images? In short, labeling aims to enable viewers to recognize that these images are not a credible, authentic representation of reality the way original photos are.
It’s important to stress that GenAI images are not necessarily intentionally misleading deepfakes: they could also be artistic or fun creations with no ill intent, or they could be embedded as AI-generated backgrounds or objects in camera-captured product photos so that potential buyers can envision what these products would look like in specific settings.
For certain GenAI image use cases, knowing that these images were (at least partially) AI-generated is a “must know”; for others, it doesn’t really matter or is simply a “nice to know”.
As the tech industry is highly competitive and its leaders are often not shy about thinking they (or their company) know best, it’s remarkable how fast and how broadly the C2PA image provenance standards are gaining traction throughout the tech industry, including beyond the traditional photo industry.
The fact that companies like Google, Meta, and OpenAI are showing their support tells us they “get it”: consumers, businesses, and – most importantly – governments on both sides of the Atlantic demand protection from misleading image deepfakes.
More specifically (thanks to the relentless industry evangelizing by Adobe and others), they also “get” that it’s vital to work together with other industry players and to stay compatible with broadly endorsed image provenance standards and labeling guidelines.
Why does it matter that Google, Meta, and OpenAI will label GenAI images specifically? Let’s go back to the food labeling comparison. C2PA has its roots as a standard for describing metadata about the traditional source of an image, such as the camera, photographer, or stock agency. In other words, this type of metadata signals the authenticity and credibility of the photo.
C2PA has been, first and foremost, akin to the Certified Organic sticker that tells you the image can be trusted to be an authentic photo.
Now that major creation platforms for AI-generated images are also coming on board and labeling an image’s origin as “AI-generated,” the C2PA labeling standard is turning into a more comprehensive “food origin” label – covering both Certified Organic and Commercially Grown, so to speak – intended to show provenance info for any image, whether it was a camera-captured and responsibly edited photo or (at least partially) AI-generated.
How well does it all work? There is a lot to be said about this topic, and we’ll have a panel of image authenticity experts take a deep dive into it at our Re-establishing Trust in Visuals Spotlight on April 10. In short, C2PA is off to a great start but still has a long way to go. For example, smart bad actors can strip images of their metadata or resave them as screen captures without the metadata.
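To make that weakness concrete, here’s a minimal Python sketch (assuming the Pillow library; file names are placeholders) of how easily metadata disappears: simply re-saving an image drops its EXIF block, and C2PA manifests – which live in their own JUMBF container – are likewise discarded by tools that don’t know about them.

```python
# Minimal sketch (requires Pillow; file names are placeholders):
# re-saving an image without explicitly carrying its metadata over
# produces a clean copy with pixels only. C2PA manifests, stored in
# JUMBF segments, are likewise dropped by metadata-unaware tools.
from PIL import Image

original = Image.open("labeled_photo.jpg")
print("EXIF bytes in original:", len(original.info.get("exif", b"")))

# Save without passing the metadata along -- nothing survives.
original.save("stripped_copy.jpg", quality=95)

stripped = Image.open("stripped_copy.jpg")
print("EXIF bytes in copy:", len(stripped.info.get("exif", b"")))  # 0
```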
Images with C2PA metadata will most likely never be foolproof – but neither are spam filters or virus protection apps, and those are still much-appreciated tools for limiting the threats and nuisances they tackle.
As C2PA keeps refining its standards, and more and more legitimate image creation developers come on board, we might eventually reach a situation in which we encounter three types of takeaways based on how an image is labeled (see the sketch after this list):
• View with extreme caution: Images labeled as AI-generated or substantially AI-enhanced – perhaps creative, fun, or informative images, but we need to be aware that they don’t necessarily reflect reality.
• View with serious caution: Images lacking any provenance labels. For now, these are the overwhelming majority, but their share will hopefully shrink significantly over time.
• View with a baseline level of caution: Images labeled as camera-captured and coming from a trusted source – we can assume these photos represent reality (yes, camera-captured photos could have been misleadingly edited in Photoshop, which is why you also need to know the source of the image and draw your conclusions based on how much you trust that source).
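How a verifier might turn an image’s label (or the lack of one) into one of these three verdicts can be sketched in a few lines of Python. This is a simplified illustration, not official C2PA tooling: the manifest structure below is abbreviated and assumed to be already parsed and signature-verified, while the digitalSourceType codes come from the IPTC vocabulary that C2PA references.

```python
# A minimal sketch, NOT official C2PA tooling: map a simplified,
# already-parsed-and-signature-verified manifest (or its absence)
# to the three caution levels above. The dict structure here is
# abbreviated; real manifests are signed JUMBF data that should be
# read and verified with the official c2pa libraries.

# IPTC digital-source-type codes that C2PA references for GenAI content.
AI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def caution_level(manifest):
    """Return the takeaway level for an image's (simplified) manifest."""
    if manifest is None:
        # No provenance label at all: today's overwhelming majority.
        return "view with serious caution"
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                if action.get("digitalSourceType") in AI_SOURCE_TYPES:
                    return "view with extreme caution (AI-generated)"
    # Labeled, signed, and not declared AI-generated.
    return "view with baseline caution (camera-captured, trusted source)"

print(caution_level(None))  # -> view with serious caution
```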
Finally, while industry-wide image provenance labeling initiatives are vital for our photo & video industry to move forward, don’t rule out a nascent approach for which the jury is still out: AI-powered tools that detect whether an image was generated by AI – AI fighting AI.
So far, AI-generation detection solutions are, like image provenance labeling methods, far from perfect, but companies like previous Visual 1st presenters Hive and Intel Labs give an indication of what might be on the horizon.