Generative AI has revolutionized content creation, allowing businesses to produce high-quality, personalized content at scale. However, the technology also brings a significant challenge: trust. One of the main concerns with generative AI is that it may produce false or misleading content, which can harm the credibility of businesses and erode public trust.

Another concern is the potential for bias in generative AI. AI systems learn from existing data, and if that data is biased or contains errors, the AI may produce content that perpetuates those biases. This can lead to discrimination and other negative consequences.

A recent study led by the University of Warwick revealed that when people were shown manipulated images of real-world scenes, they failed to spot around 35% of them. With more than 2 million fake or AI-generated images now being produced per day, the potential for harm is growing exponentially. That is still far from the roughly 5 billion real photos taken every day, but if adoption rates are any indication, the gap won't last long.

With this flood of new content and our natural inability to reliably distinguish real from fake, the parties involved must take proactive measures rather than wait for legislation to catch up.

A critical step is implementing ethical guidelines and standards for the use of AI in content creation. These guidelines should ensure that the AI system is transparent and explainable, that it does not produce false or misleading content, and that its output complies with the company’s values and guiding principles. Generative AI companies like Bria even go the extra mile by appointing a VP of Ethics and publishing a clearly visible, well-defined policy.

A second option is identifying generative content via embedded metadata and/or machine-readable invisible watermarks. Both give the content consumer clear information about the origin of the creation, its source, and the creator’s intent. Initiatives like the Content Authenticity Initiative and the C2PA, as well as existing technologies like Imatag’s invisible watermarking or TruePic’s digital certification, are already establishing themselves as indispensable tools.
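To make the idea concrete, the sketch below shows the simplest possible form of embedded provenance metadata: writing standard EXIF fields with Pillow. This is not how C2PA works (C2PA manifests are cryptographically signed and tamper-evident), and the model name, file names, and field values are hypothetical; it only illustrates shipping origin information alongside the pixels.

```python
# Minimal, hypothetical sketch of embedding provenance metadata in an image.
# Real C2PA manifests are cryptographically signed; plain EXIF tags like these
# can be stripped or edited, so treat this purely as an illustration.
from PIL import Image

GENERATOR = "ExampleDiffusion v1.0"   # hypothetical model name
DISCLOSURE = "AI-generated image; no real person or event is depicted"

def tag_provenance(src_path: str, dst_path: str) -> None:
    """Copy an image and record who/what generated it in standard EXIF fields."""
    img = Image.open(src_path).convert("RGB")
    exif = img.getexif()
    exif[0x0131] = GENERATOR          # Software: which system produced the image
    exif[0x010E] = DISCLOSURE         # ImageDescription: human-readable disclosure
    exif[0x013B] = "Acme Marketing"   # Artist: the party that commissioned the content
    img.save(dst_path, exif=exif)

def read_provenance(path: str) -> dict:
    """Read the provenance fields back, as a verifier or platform might."""
    exif = Image.open(path).getexif()
    return {"software": exif.get(0x0131), "description": exif.get(0x010E)}

if __name__ == "__main__":
    tag_provenance("generated.png", "generated_tagged.jpg")
    print(read_provenance("generated_tagged.jpg"))
```

Because metadata like this is easy to strip when an image is re-saved or screenshot, robust invisible watermarks that survive resizing and compression, such as those Imatag provides, are the natural complement rather than a replacement.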


While there is nothing inherently wrong with creating fabricated content from a text prompt, it is essential to continuously monitor the AI system’s output and address any issues promptly. This helps ensure the system works as intended and produces trustworthy, reliable, high-quality content. It can also mean disabling specific capabilities, for example the ability to recreate an existing person’s face.
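One possible guardrail along those lines, sketched below, is a post-generation check that blocks any output whose face matches a gallery of real, protected individuals. This is not how any particular vendor implements the restriction; it assumes the open-source face_recognition package, and the file names, gallery, and 0.6 tolerance are illustrative.

```python
# Hypothetical post-generation guard: refuse to publish images whose faces
# match a gallery of real, known individuals. Uses the open-source
# `face_recognition` package; paths and tolerance are illustrative only.
import face_recognition

# Face encodings of real people the system must never reproduce (hypothetical files).
protected = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["ceo.jpg", "spokesperson.jpg"]
]

def is_safe_to_publish(generated_path: str) -> bool:
    """Return False if any face in the generated image resembles a protected person."""
    image = face_recognition.load_image_file(generated_path)
    for encoding in face_recognition.face_encodings(image):
        matches = face_recognition.compare_faces(protected, encoding, tolerance=0.6)
        if any(matches):
            return False   # generated face matches a real, protected person
    return True            # no protected faces detected

if __name__ == "__main__":
    print(is_safe_to_publish("candidate_render.png"))
```

A check like this is only one layer; in practice it would sit alongside prompt filtering and human review rather than replace them.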

Solving the trust issue with generative AI content requires a holistic approach involving careful data selection, ethical guidelines, identification tools, and ongoing monitoring. By taking these steps, businesses can build trust in generative AI and leverage its full potential for content creation.


Main photo by engin akyurt on Unsplash

Author: Paul Melcher

Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Stipple, and more. Melcher received a Digital Media Licensing Association Award, is a board member of Plus Coalition, Clippn, and Anthology, and has been named among the “100 most influential individuals in American photography”.
