The promise is seductive: generate product photography in seconds, automate image editing at scale, produce marketing visuals without the overhead of traditional production. Generative AI tools for visual content have moved from experimental curiosity to operational necessity faster than most compliance frameworks can adapt.
If your organization is deploying generative AI for visual content, whether through dedicated tools, embedded features in your DAM system, or creative software, you need a clear-eyed view of the risks, obligations, and questions to ask before you scale. This guide maps the terrain.
The Human Authorship Threshold: When Do You Lose Copyright?
The most fundamental question for any enterprise using AI to generate or edit visual content: can you own what the machine produces?
The answer, across virtually every jurisdiction, is the same: human authorship is required for copyright protection. AI cannot be an author. The U.S. Copyright Office’s January 29, 2025 report reaffirmed that prompts alone, no matter how detailed, do not constitute sufficient human control to establish authorship. The D.C. Circuit Court upheld this position in Thaler v. Perlmutter on March 18, 2025.
Japan, South Korea, the EU, and India have all taken similar positions. South Korea’s June 2025 copyright registration guidelines explicitly require applicants to document and distinguish the human-created portions of any AI-assisted work.
What this means operationally: If you generate an image entirely through AI and use it in your marketing, you may have no copyright protection over that asset. A competitor could use it without consequence. If you want legal protection, you need to add a substantial human creative contribution, and you need to document what that contribution was.
The threshold itself, however, is not yet precisely defined: how much AI assistance is too much? The Copyright Office acknowledges that AI-assisted editing of human-created work (using Photoshop’s AI tools to remove a background, for instance) does not strip copyright from the original. But where the line falls between “AI as tool” and “AI as creator” remains a case-by-case determination. Organizations should establish documentation practices now, creating audit trails of human creative decisions in any AI-assisted workflow.
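An audit trail does not need to be elaborate to be useful; an append-only log that records who made which creative decision on which file version is a reasonable starting point. The minimal sketch below illustrates the idea in Python; the field names (asset_id, operator, tool, decision) and the JSON Lines layout are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of an audit trail for human creative decisions in an
# AI-assisted workflow. Field names and file layout are illustrative only.
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("creative_decisions.jsonl")  # append-only JSON Lines log

def log_creative_decision(asset_path: str, operator: str, tool: str, decision: str) -> dict:
    """Record one human creative decision made on an asset."""
    asset_bytes = Path(asset_path).read_bytes()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "asset_id": hashlib.sha256(asset_bytes).hexdigest(),  # ties the entry to this exact file version
        "asset_path": asset_path,
        "operator": operator,   # the human who made the decision
        "tool": tool,           # e.g. "Photoshop (AI background removal)"
        "decision": decision,   # what the human decided, in their own words
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example usage (hypothetical asset and operator):
# log_creative_decision(
#     asset_path="campaign/hero_v3.psd",
#     operator="jane.doe",
#     tool="Photoshop (AI background removal)",
#     decision="Chose composition and color grade; retained AI-removed background",
# )
```

A log like this is only evidence, not a registration, but it gives you something concrete to point to when the human-contribution question is asked later.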
Training Data Liability: The Risk Upstream
You did not scrape the training data. You did not build the model. But if the model you are using was trained on copyrighted content without authorization, you may be exposed.
This is not hypothetical. The Bartz v. Anthropic settlement in 2025, valued at $1.5 billion, arose from allegations that the company downloaded millions of pirated copies of books to train its models. The New York Times v. OpenAI litigation continues. Getty Images’ case against Stability AI focused explicitly on the appearance of Getty watermarks in generated outputs, a direct indicator of training data provenance.
The Copyright Office’s May 2025 report on generative AI training concluded that “some uses of copyrighted works for generative AI training will qualify as fair use, and some will not.” The courts are still deciding, case by case.
What this means operationally: Understand your vendor’s training data sourcing. Adobe explicitly markets Firefly as trained on licensed content and offers enterprise indemnification. Midjourney’s training practices are under legal scrutiny. Stability AI faces ongoing litigation. Your vendor selection is a risk management decision. Ask specifically: What is the provenance of your training data? What licensing agreements are in place? What happens if your training data is found to infringe?

Output Infringement: Trademarks, Copyright, and Trade Dress

Even if the model’s training was lawful, the output might not be. AI image generators can and do produce content that infringes existing copyrights, trademarks, and trade dress, sometimes with startling fidelity.

In June 2025, Disney and Universal filed suit against Midjourney, alleging that the service could be prompted to generate near-identical reproductions of characters, including Yoda, Bart Simpson, Iron Man, and Shrek. Warner Bros. filed a similar complaint in September. These cases argue both direct infringement (the model generates infringing outputs) and secondary infringement (the platform enables users to infringe).

The implications extend beyond fictional characters. Design elements, product shapes, brand-associated visual styles, distinctive furniture silhouettes, the curve of a particular car, all of these can carry trademark or trade dress protection. An AI-generated product shot that inadvertently includes protected design elements creates liability exposure.

What this means operationally: Implement review processes for AI-generated visual content. Do not assume that because you did not intentionally prompt for a protected element, the output is clean. Train your teams to recognize potential trademark and copyright issues. Consider whether your vendor offers tools to detect potentially infringing content, and whether those tools actually work.

The Indemnification Gap: Who Actually Has Your Back?

Vendor indemnification for AI-related IP claims has become a competitive differentiator. Microsoft, Adobe, and Anthropic offer enterprise indemnification under certain conditions. OpenAI offers limited indemnification for API, Team, and Enterprise customers. Midjourney offers none.

But here is the gap most enterprise buyers overlook: the indemnification chain may not reach you. Consider a common scenario: Your organization uses a DAM platform that has integrated OpenAI’s API to power its image editing features. The contract between the DAM vendor and OpenAI may include indemnification. But your contract is with the DAM vendor, not with OpenAI. Does your DAM vendor pass through that indemnification to you? In most cases, the answer is no, or the coverage is significantly narrower than what the AI provider offers.

Even when indemnification exists, the carve-outs matter. OpenAI’s indemnification does not apply if: (1) you “should have known” the output was infringing, (2) you disabled safety features, (3) you modified the output or combined it with other products, or (4) the claim involves trademark use in commerce. Each of these conditions significantly limits practical protection.

What this means operationally: Review every contract in the chain. If your creative software, DAM system, or marketing platform embeds AI features, ask explicitly: Does your agreement with the AI provider include indemnification? If so, do you pass any of that protection through to your customers? What are the conditions and exclusions? Document the answers. The absence of protection is itself information you need for risk management.
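One way to make “document the answers” actionable is to capture the chain in a structured record rather than scattered email threads. The sketch below is a hypothetical layout in Python; the fields (passes_through, carve_outs) and the example entry are assumptions to be replaced by whatever your contracts actually say, and nothing here substitutes for legal review.

```python
# Hypothetical structure for documenting the indemnification chain across
# vendors. Illustrative only; actual coverage is defined by the contracts.
from dataclasses import dataclass, field

@dataclass
class IndemnificationLink:
    vendor: str                      # who you contract with (e.g. a DAM platform)
    ai_provider: str                 # whose model sits behind the feature
    provider_indemnifies_vendor: bool
    passes_through: bool             # does any protection actually reach you?
    carve_outs: list[str] = field(default_factory=list)
    notes: str = ""

chain = [
    IndemnificationLink(
        vendor="DAM platform (example)",
        ai_provider="Embedded image-generation API",
        provider_indemnifies_vendor=True,
        passes_through=False,        # the gap most buyers overlook
        carve_outs=["modified outputs", "disabled safety features", "trademark use in commerce"],
        notes="Coverage stops at the vendor; raise at contract renewal.",
    ),
]

# A gap report is then a simple filter over the chain:
for link in (l for l in chain if not l.passes_through):
    print(f"No pass-through protection for {link.ai_provider} via {link.vendor}")
```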
Mandatory Disclosure: The Global Regulatory Landscape

Disclosure requirements for AI-generated content are no longer theoretical. Multiple jurisdictions now mandate labeling, with more coming into force throughout 2026.

European Union: Article 50 of the EU AI Act requires that AI-generated content be “marked in a machine-readable format and detectable as artificially generated or manipulated.” Deepfakes and synthetic media used for public-facing purposes must be labeled. These requirements become enforceable on August 2, 2026. The Commission published its first draft Code of Practice on marking and labeling in December 2025, with final guidance expected by May 2026. Penalties under the AI Act can reach 35 million euros or 7% of global revenue for the most serious violations; breaches of the transparency obligations are capped at 15 million euros or 3% of global annual turnover.

California (US): AB 853, signed into law in October 2025, requires AI providers to embed latent disclosures (machine-readable metadata) in AI-generated content and offer users the option of visible manifest disclosures. The law becomes effective on August 2, 2026, aligned with the EU timeline. Starting January 2027, large online platforms must detect and display provenance data. Starting January 2028, camera manufacturers must offer provenance embedding for authentic content.

South Korea: The AI Basic Act took effect on January 22, 2026. The June 2025 copyright registration guidelines require detailed documentation of AI use and human creative contribution for works seeking copyright protection.

Japan: The AI Promotion Act, passed in May 2025, emphasizes transparency but takes a non-binding, soft-law approach. Japan’s copyright framework (Article 30-4) includes specific provisions for AI training under “non-enjoyment” conditions. The AI Guidelines for Business recommend developing “reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking” to enable users to identify AI-generated content.

India: The November 2025 AI Governance Guidelines are non-binding but signal direction. Draft IT Rules would require visible labeling covering 10% of the display area for visual content and audible disclosure for 10% of the duration for audio content.

Australia and Brazil: Both are actively developing frameworks. Australia has rejected a text-and-data-mining exception and is pursuing a paid licensing regime with transparency requirements. Brazil’s AI Bill, currently in the House, includes a dedicated copyright chapter with remuneration schemes for training data use.

What this means operationally: If you operate across multiple jurisdictions, assume that disclosure requirements are coming. The EU and California timelines are fixed. Build processes now to track which content is AI-generated and ensure your systems can embed the required metadata. Consider C2PA-compliant tools and workflows as a de facto standard.
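On the metadata side, the IPTC Digital Source Type vocabulary already defines a value for media created by a trained algorithm, and C2PA manifests extend the same idea to signed provenance. The sketch below assumes the ExifTool command-line utility is installed and shows one minimal way to embed that latent disclosure; it is a starting point, not a full C2PA implementation or a guarantee of regulatory compliance.

```python
# Sketch: embed a machine-readable "AI-generated" marker using ExifTool
# (assumed to be installed and on PATH). Writes the IPTC Digital Source Type
# value for trained algorithmic media into the image's XMP metadata.
# This is a minimal latent disclosure, not a complete C2PA manifest.
import subprocess

AI_GENERATED = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def mark_as_ai_generated(image_path: str) -> None:
    """Write an XMP Digital Source Type tag identifying the image as AI-generated."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={AI_GENERATED}",
            "-overwrite_original",
            image_path,
        ],
        check=True,
    )

# Example usage (hypothetical file):
# mark_as_ai_generated("generated/product_shot_001.jpg")
```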
Protecting Your Own Assets: The Scraping Question

The liability question runs both directions. If AI companies face consequences for training on unauthorized content, your assets may be at risk of being scraped for someone else’s model.

In 2025, South Korea’s three major broadcasters sued Naver for allegedly using news content to train AI models without permission. The Korean Association of Newspapers has filed complaints against foreign AI companies, including OpenAI and Google. In Japan, the Yomiuri Shimbun filed suit against Perplexity. Australian creative industries successfully lobbied against a text-and-data-mining exception, arguing it would “legalize theft.”

What this means operationally: Audit your exposure. Where are your visual assets published? What technical protections (robots.txt, AI-specific blocks, RSL) are in place? Do your terms of service address AI training use? Some platforms, such as Stability AI, have offered opt-out mechanisms. If your visual assets have commercial value, the absence of protective measures may itself create risk as licensing frameworks develop.
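A first-pass audit of those technical protections can be automated. The sketch below uses Python’s standard robots.txt parser to check whether a site blocks crawlers commonly associated with AI training; the user-agent list is illustrative and changes frequently, and robots.txt is advisory rather than enforceable, so treat this as a monitoring aid rather than a protection in itself.

```python
# Sketch: check whether a site's robots.txt disallows common AI training
# crawlers. The user-agent list is illustrative, not exhaustive, and
# robots.txt is only advisory; this is a monitoring aid, not enforcement.
from urllib.robotparser import RobotFileParser

# Crawlers commonly associated with AI training or AI answer engines.
AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot", "ClaudeBot", "PerplexityBot"]

def audit_robots(site: str, sample_path: str = "/") -> dict:
    """Return, per crawler, whether robots.txt permits fetching sample_path."""
    rp = RobotFileParser()
    rp.set_url(f"{site.rstrip('/')}/robots.txt")
    rp.read()
    return {bot: rp.can_fetch(bot, f"{site.rstrip('/')}{sample_path}") for bot in AI_CRAWLERS}

if __name__ == "__main__":
    results = audit_robots("https://www.example.com")  # replace with your own domain
    for bot, allowed in results.items():
        print(f"{bot}: {'ALLOWED (no block in robots.txt)' if allowed else 'blocked'}")
```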
Where Should Your Organization Be? A Maturity Framework

The questions above are not yes-or-no compliance checkboxes. They represent a spectrum of organizational capability. The Authenticity & Content Provenance Maturity Model provides a framework for assessing where your organization stands and where it needs to go.

At the lowest maturity levels, organizations have no systematic approach to content provenance. They cannot answer basic questions: Which of our published assets were generated by AI? Where did the models we use source their training data? What metadata is embedded in our outputs? These organizations face the highest regulatory and legal risk.

At higher maturity levels, organizations have implemented systematic provenance tracking throughout their content lifecycle. They can demonstrate human authorship for copyright-protected work, document their vendors’ training data practices, review outputs for infringement risk, and embed required metadata before publication. These capabilities are not optional features. They are becoming baseline requirements for operating in regulated markets.
What Comes Next

The landscape will continue shifting. The EU’s Code of Practice on AI content marking will be finalized by mid-2026. The Disney and Midjourney litigation will produce precedent. Australia’s licensing framework will take shape. The U.S. may see federal action. But the direction is clear: transparency, accountability, and provenance are becoming mandatory, not optional.

Organizations that build these capabilities now will be positioned for compliance. Those that wait will be scrambling to retrofit systems and practices under regulatory pressure. The technology is moving fast. The law is catching up. Your operational practices need to be ahead of both.
Paul Melcher is Managing Director of MelcherSystem LLC and Editor of Kaptur Magazine. He developed the Authenticity & Content Provenance Maturity Model and advises organizations on content authenticity strategy. He is not a legal professional, and this article does not constitute legal advice; readers should seek qualified legal counsel for their specific circumstances.

Main Photo by Giammarco Boscaro on Unsplash
Author: Paul Melcher
Paul Melcher is a highly influential and visionary leader in visual tech, with 20+ years of experience in licensing, tech innovation, and entrepreneurship. He is the Managing Director of MelcherSystem and has held executive roles at Corbis, Gamma Press, Stipple, and more. Melcher received a Digital Media Licensing Association Award and has been named among the “100 most influential individuals in American photography.”
