According to a recent study, over 63% of Americans say they often come across fake images online. Whether those images are truly faked or merely perceived as such, the damage is done. We are rapidly losing trust in what we see, our number one evolutionary source of information. If this trend continues, photography and video will soon be seen as purely artistic mediums, alongside painting or sketching. Adobe, with its year-old Content Authenticity Initiative, plans to combat this trend and recently demonstrated how.
Interestingly enough, Adobe did not rely on the industry's historical mammoths (Canon, Nikon, Leica, …) to showcase the practical application of its initiative, but rather on a partnership between a start-up, Truepic (which we introduced here), and a microchip manufacturer, Qualcomm. This is not a mishap but rather a reflection of the current photography landscape: 85% of the trillion images uploaded online today are taken with a cellphone. Qualcomm provides chips to some of the most popular cellphones in the world, and the Truepic solution is mobile-native.
Armed with an identified mobile camera, award-winning photographer Sara Naomi Lewkowicz took a series of pictures embedded with the new feature. The result is images that carry information on who produced the content and what alterations have been made to it since its creation.
For those who would like to explore deeper, the "view more" link opens a verify page online displaying thumbnails of the previous versions. It also contains the signature of the capture device, which will need to be vetted by a yet-to-be-established consortium to be accepted as trusted.
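To make the concept concrete, the attached provenance data can be pictured as a signed manifest recording the capture device, the creator, and every subsequent edit, which a verify page can check against the image itself. The sketch below is a deliberately simplified illustration, not Adobe's or Truepic's actual format: the manifest fields, the HMAC stand-in for a hardware-backed device signature, and all names are invented for the example.

```python
import hashlib
import hmac
import json

# Stand-in for a key held in the phone's secure hardware (hypothetical).
DEVICE_KEY = b"device-secret-key"


def sign(payload: dict) -> str:
    """Sign the manifest payload; HMAC stands in for a real device signature."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, blob, hashlib.sha256).hexdigest()


def make_manifest(image_bytes: bytes, creator: str, device: str) -> dict:
    """Build a simplified provenance manifest at capture time."""
    payload = {
        "creator": creator,
        "capture_device": device,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "edits": [],  # each editing tool would append a signed entry here
    }
    return {"payload": payload, "signature": sign(payload)}


def verify(manifest: dict, image_bytes: bytes) -> bool:
    """What a verify page does conceptually: check signature and image hash."""
    payload = manifest["payload"]
    signature_ok = hmac.compare_digest(manifest["signature"], sign(payload))
    hash_ok = payload["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return signature_ok and hash_ok


photo = b"...raw image bytes..."
m = make_manifest(photo, creator="Sara Naomi Lewkowicz", device="Qualcomm reference phone")
print(verify(m, photo))             # True: image matches its signed record
print(verify(m, b"altered pixels"))  # False: pixels changed with no new record
```

The point of the sketch is the principle, not the format: alterations are not forbidden, they simply have to be recorded and re-signed, so any unrecorded change becomes detectable.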
Also participating in this soft launch were Twitter and The New York Times, both co-founders of the initiative but still inactive on its implementation. For good reason: while Adobe has been generously sharing demonstrations of the initiative's capabilities, a lot still has to happen before the general public consumes this. First and foremost is wide enough industry acceptance and implementation.
Key to this initiative is the realization that technology alone will not solve the issue of trust in images. As we have seen, tools used to deceive evolve much faster than those built to catch them. Adobe is well aware of this, thanks to Photoshop, probably the number one image-falsifying tool. Rather, knowing who created the image and what alterations have been made is enough to give the viewer the information needed to make a decision. In time, that decision could be made by an AI.
The CAI is not for everyone. The majority of photos will never need it. Your aunt Annie will not need it when you send her pictures of her nephew playing soccer. Nor would your friends on Facebook really doubt that the images you took of your latest vacation in Greece are real; if they do, there is a deeper relationship issue to be solved. It will not be a requirement for images to be published. The CAI will find its home in professional photography, like photojournalism and documentary work, as well as a myriad of business applications such as insurance or e-commerce. In time, it will make all photography more trustworthy as we learn to better understand the relationship between content, context, and motivation.
Author: Paul Melcher
Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Gamma Press, Stipple, and more. Melcher received a Digital Media Licensing Association Award and has been named among the "100 most influential individuals in American photography."