The race to solve the problem of content authenticity in the age of generative AI has produced no shortage of proposed solutions: blockchain-based provenance systems, NFT verification schemes, distributed ledgers, Merkle tree architectures. All promise immutable, decentralized trust infrastructure for visual content. Meanwhile, centralized commercial platforms offer proprietary verification services, paid authentication, and subscription-based provenance tracking.
These approaches share a fatal flaw that has nothing to do with their technical elegance: they gate trust behind commercial access. And in doing so, they create systems that, by their very design, cannot serve the function that trust infrastructure requires.
The Access Layer Problem
Proponents of distributed trust systems are correct about one thing: the technology does replicate how our trust web works. A properly functioning distributed ledger spreads trust across many nodes. You’re not trusting one entity. You’re trusting a consensus mechanism across hundreds or thousands of validators. Change the record on one node, and the others reject it. This is genuinely distributed trust.
The architecture mirrors the social graph. We trust people who, in turn, trust other people, who, in turn, trust other people. We rely on this web of trust to extend confidence to people and claims we’ve never directly encountered. If one node fails, the network persists. There are multiple paths, redundancy, and cross-validation. Blockchain, Merkle trees, and distributed ledgers all get this right at the technical layer.
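To make that technical claim concrete, here is a minimal sketch in Python of majority consensus across replicated nodes. It is a toy model, not any real ledger's protocol; the `Node` class, the record payloads, and the simple majority rule are all illustrative assumptions.

```python
import hashlib
from dataclasses import dataclass

def record_hash(payload: str) -> str:
    """Content hash that every node computes independently."""
    return hashlib.sha256(payload.encode()).hexdigest()

@dataclass
class Node:
    """A validator holding its own replica of the record."""
    name: str
    payload: str

    def vote(self, claimed_hash: str) -> bool:
        # Each node checks the claim against its local copy.
        return record_hash(self.payload) == claimed_hash

def network_accepts(nodes: list[Node], claimed_hash: str) -> bool:
    """Consensus: the claim stands only if a majority of replicas agree."""
    votes = sum(node.vote(claimed_hash) for node in nodes)
    return votes > len(nodes) // 2

# Four honest replicas plus one tampered copy.
nodes = [Node(f"validator-{i}", "photo:abc123 captured 2024-05-01") for i in range(4)]
nodes.append(Node("validator-4", "photo:abc123 captured 2024-06-15"))  # altered record

claim = record_hash("photo:abc123 captured 2024-05-01")
print(network_accepts(nodes, claim))  # True: the honest majority agrees
print(nodes[-1].vote(claim))          # False: the tampered node is outvoted
```

Tampering with one replica changes its local hash, so its vote simply loses to the honest majority. That is the property the distributed-ledger proposals get right.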
The fatal flaw isn’t in the distributed architecture. It’s in how you access it.
To participate in that beautifully distributed network, you must pass through a centralized commercial gatekeeper. You don't interface directly with the distributed ledger; you interface with a company that sells you access to it. That company controls your entry point through gas fees, subscription costs, or gateway charges. It sets your terms of service, determines whether your content gets verified, and can cut off your access at any time.
Layer 1 is distributed. Layer 0 (the access layer) is centralized. And Layer 0 controls everything.
The Volume Problem
Even if access were free, there’s a practical problem with what these systems propose to store. Most commercial provenance solutions want to keep a copy of the image along with all associated metadata as proof of authenticity. When you need to verify any image, you ping the database. It returns the original with the original caption and metadata. You have proof of origin along with ground truth.
Except that information is fluid. Metadata evolves. Captions get corrected. Context gets added. Rights information changes. Location data gets refined. This isn’t an edge case. It’s constant reality in news photography and archival work. The original caption might have a name wrong. Later investigation might reveal crucial context about when or where an image was captured. Copyright status changes hands. GPS coordinates need correction.
How do you edit an entry in an immutable ledger? You can’t update the record without invalidating the hash, which breaks the entire chain. You could add new entries, but now which one is authoritative? The original with wrong information, or the update? How does someone querying the ledger know which version to trust? Immutability becomes a bug, not a feature.
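A minimal hash-chain sketch in Python makes the trade-off concrete. This is a toy model, not any particular ledger's format: each entry commits to the hash of the previous one, so correcting even a single early caption invalidates every later link.

```python
import hashlib
import json

def link_hash(prev_hash: str, entry: dict) -> str:
    """Each link commits to its payload AND the previous link's hash."""
    blob = json.dumps({"prev": prev_hash, "entry": entry}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def build_chain(entries: list[dict]) -> list[str]:
    hashes, prev = [], "genesis"
    for entry in entries:
        prev = link_hash(prev, entry)
        hashes.append(prev)
    return hashes

entries = [
    {"image": "img_001", "caption": "Jon Smith at the summit"},  # name is misspelled
    {"image": "img_002", "caption": "Flood damage, riverside"},
    {"image": "img_003", "caption": "Election night rally"},
]
original = build_chain(entries)

# Correct the typo in the first caption...
entries[0]["caption"] = "John Smith at the summit"
corrected = build_chain(entries)

# ...and every subsequent hash changes, so the stored chain no longer validates.
print([a == b for a, b in zip(original, corrected)])  # [False, False, False]
```

The only options are to rebuild the chain (defeating immutability) or to append a correction and leave unresolved which entry is authoritative.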
The scale makes this worse. Associated Press alone has over 60 million images in its archive and produces 1.3 million new images per year. That's one wire service. Multiply across hundreds of wire services worldwide, medium and smaller agencies, stock photography libraries, and countless independent photographers. Storing all of it, and continuously maintaining servers that ingest new content while simultaneously answering verification queries, would be a massive undertaking.
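A rough back-of-envelope estimate, using the AP figures above and an assumed average of 20 MB per full-resolution image plus metadata (the per-image size is an illustrative assumption, not an AP figure):

```python
# Back-of-envelope storage estimate for one wire service (AP figures from the text).
# The 20 MB average per full-resolution image + metadata is an assumption.
ARCHIVE_IMAGES = 60_000_000        # existing archive
NEW_IMAGES_PER_YEAR = 1_300_000    # annual output
AVG_BYTES = 20 * 1024**2           # assumed 20 MB per image

archive_tb = ARCHIVE_IMAGES * AVG_BYTES / 1024**4
growth_tb_per_year = NEW_IMAGES_PER_YEAR * AVG_BYTES / 1024**4

print(f"Archive alone: ~{archive_tb:,.0f} TB")          # ~1,144 TB (over a petabyte)
print(f"Annual growth: ~{growth_tb_per_year:,.0f} TB")  # ~25 TB per year
```

Over a petabyte for a single publisher's archive, before any replication across nodes or capacity for the verification query load.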
Who pays for this infrastructure? The economics only work if you’re charging for access (back to the gatekeeper problem) or you have a business model that can sustain essentially infinite storage growth with infinite query capacity. Neither exists.
The Monopoly Problem
Whoever controls this database doesn’t just control the verification infrastructure. They control access to the canonical versions of images. They become the single source of truth not just for provenance, but for the content itself.
Need to verify an image? You query their database. Need the original? It’s in their database. Need context? Their metadata. Want to license it? Their terms.
This would create the biggest visual content monopoly in the world. A sort of private copyright office that also controls access to claims of authenticity and truth. And since they hold a monopoly on proof of provenance, they can charge whatever they want and decide who has access and who doesn’t.
What happens if you don’t pay your bills? They delete your content. It’s no longer verifiable. Your proof of authenticity disappears.
And finally, businesses fail. The average startup lasts three to five years. What happens when the company goes bankrupt? Or gets acquired by a conglomerate that decides maintaining a provenance database isn't profitable? Every image verified through that system becomes unverifiable. Decades of provenance records become unreliable or simply disappear.
What Actually Persists
What infrastructure outlives individual businesses? Standards without gatekeepers. HTTP didn't require paying Tim Berners-Lee. JPEG doesn't stop working if the original committee dissolves. GPS functions because it's government infrastructure, not a startup.
Internet protocols managed through the IETF and W3C have no single point of failure. Open source projects with multiple implementations can't be unilaterally rendered obsolete. Government-backed systems are funded by taxation rather than revenue models.
C2PA’s architecture follows this model. The Coalition for Content Provenance and Authenticity operates as a standards body whose members, including Adobe, Microsoft, camera manufacturers, and news organizations, have an incentive to maintain interoperability while still competing with each other. No single entity can monetize the gateway.
There’s no payment required to participate. C2PA signatures can be created and validated by any software that implements the open standard. If Adobe disappeared tomorrow, C2PA signatures would still validate in other applications. The trust chain doesn’t depend on any single commercial entity’s survival.
The standard exists independently of any implementation. You don't trust a single canonical validator. You can verify through Adobe's implementation, or Microsoft's, or an open-source validator, or build your own. Multiple trust paths lead to the same verification.
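A conceptual sketch in Python of why this works: once the manifest format and signing scheme are public, verification needs only the asset, the manifest, and the signer's public key. The snippet below is not the real C2PA wire format (which uses COSE signatures and embedded JUMBF manifests); it uses an Ed25519 key from the `cryptography` package purely to illustrate the principle.

```python
# Conceptual sketch only: NOT the actual C2PA manifest or signature format.
# It shows why verification needs only the open spec and the signer's public key,
# not any particular vendor's service. Requires the 'cryptography' package.
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Signing side (e.g., a camera or editing tool) ---
signing_key = Ed25519PrivateKey.generate()
asset_bytes = b"...raw image bytes..."
manifest = {
    "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    "claim": "captured by device X, no edits",  # hypothetical claim text
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)
public_key = signing_key.public_key()

# --- Verifying side: any implementation of the spec can do this ---
def verify(asset: bytes, manifest: dict, signature: bytes, public_key) -> bool:
    """Check the binding (hash matches the asset) and the signature (claim untampered)."""
    if hashlib.sha256(asset).hexdigest() != manifest["asset_sha256"]:
        return False
    try:
        public_key.verify(signature, json.dumps(manifest, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

print(verify(asset_bytes, manifest, signature, public_key))  # True
```

Anyone can reimplement `verify` from the published spec alone, which is exactly what keeps the trust chain independent of any single vendor's survival.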
But C2PA faces real adoption challenges. For it to work at scale, every camera manufacturer must embed it in their sensors. Every social media platform must preserve and display the signatures. Every editing application must maintain the provenance chain. This requires massive coordination across competing interests with no financial incentive to adopt quickly.
The standard itself is solid, but adoption remains challenging for both economic and political reasons. Which is why government regulation may be necessary. Legislation is beginning to mandate provenance standards for certain content categories, via the EU AI Act and similar initiatives worldwide. What the market won't do voluntarily, regulation might force. Not fair? Not according to free-market rules? Maybe, but when society's ability to function is at stake, it gets to set rules to preserve itself.
Even then, if hardware manufacturers gate C2PA-enabled sensors behind price premiums, if software vendors hide provenance features in subscription tiers, you get the same two-tier system by different means. The standard remains open, but access to implementation becomes commercialized. This is a risk the industry must actively resist.
What Trust Infrastructure Requires
Effective trust infrastructure requires universal accessibility. It requires incentives for broad inclusion, not competitive exclusion. Success must be measured by ubiquity, not scarcity.
Commercial models create the opposite incentives. Artificial scarcity of trusted content raises the value of verified items. Exclusion protects market position. Universal free access eliminates the revenue model.
This leaves the trust infrastructure in a narrow space. It must be either a public good (government-funded, such as courts or weights and measures) or a coalition of competing interests in which no single actor can monetize the gate. Or both: public standards with private implementation but open access.
What it cannot be is a rent-seeking layer where trust itself becomes the product being sold. The moment you financialize trust, you make it temporary. You gate it behind unequal access. You align incentives toward extraction rather than universality. And you make it more fragile by making it dependent on a company’s survival.
The authenticity crisis in visual content is real and urgent. We need robust infrastructure for content provenance and verification. But that infrastructure must support universal access, persist over the long term, and resist governance capture by any single entity.
Crypto and commercial solutions, whatever their technical merits, cannot provide those foundations. Not because the technology is bad, but because the business models are incompatible with the requirements. We’re trying to solve a public infrastructure problem with private market solutions. Maybe, this time, we can put greed aside.
Trust is not a competition. When we allow commercial entities to own the infrastructure of authenticity, we ensure that ‘truth’ only exists where it is profitable. For a society to function, trust must exist as a common-carrier protocol: a neutral, universal foundation that belongs to everyone and is controlled by no one.
