
C2PA Content Credentials

A tamper-evident receipt for every piece of media.

An open standard, backed by Adobe, Microsoft, the BBC, Nikon, Leica and around 300 other members, that attaches a cryptographically signed manifest to a file. It records who made it, when, with what tool, and whether AI was involved.

Who it's for
Everyone.
Stance
Defensive / legal.
Cost
Free (Adobe Content Authenticity).
Requires cooperation?
Yes: the Do Not Train flag only works if AI companies choose to honour it.

What it actually does

When you sign a file with Content Credentials, the tool bundles a small JSON-like manifest into the file's metadata and signs it with a certificate. The manifest can state things like: "Captured on a Leica M11 on 12 March 2025. Edited in Photoshop. Generative Fill was not used. Do not train on this image."
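The manifest described above can be sketched as a small JSON structure. This is an illustrative shape only, not the actual C2PA schema; real manifests encode assertions as CBOR inside JUMBF boxes, with standardised labels, and the field names below are made up for readability.

```python
import json

# Illustrative manifest shape only -- not the real C2PA schema.
# The labels below are invented for this sketch.
manifest = {
    "claim_generator": "Photoshop",  # tool that produced the claim
    "assertions": [
        {"label": "capture.device", "data": {"make": "Leica", "model": "M11"}},
        {"label": "capture.date", "data": {"when": "2025-03-12"}},
        {"label": "ai.generative_fill", "data": {"used": False}},
        {"label": "training.do_not_train", "data": {"value": True}},
    ],
}

# The signing tool serialises the manifest, hashes it together with
# the asset's bytes, and signs the result with a certificate (omitted).
payload = json.dumps(manifest, sort_keys=True).encode()
```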

If anyone later edits the file, the signature breaks: a verifier can tell. If someone strips the manifest entirely, soft bindings (perceptual fingerprints and invisible watermarks) can still link the file back to the original record. A journalist can prove their photo hasn't been tampered with. An artist can declare, in a standardised, machine-readable way, that they have not consented to AI training.
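The "signature breaks" property can be shown with a toy seal. Real Content Credentials use public-key signatures and X.509 certificates; the sketch below substitutes an HMAC from the Python standard library purely to demonstrate that any post-signing edit invalidates the seal.

```python
import hmac
import hashlib

def sign(content: bytes, key: bytes) -> bytes:
    """Toy stand-in for a C2PA signature: a MAC over the asset bytes."""
    return hmac.new(key, content, hashlib.sha256).digest()

def verify(content: bytes, signature: bytes, key: bytes) -> bool:
    """Recompute the seal and compare in constant time."""
    return hmac.compare_digest(sign(content, key), signature)

key = b"signer-private-key"            # hypothetical key material
photo = b"...original pixel data..."
seal = sign(photo, key)

print(verify(photo, seal, key))             # True: untouched file checks out
print(verify(photo + b"edit", seal, key))   # False: any edit breaks the seal
```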

What it does for you

  • Lets you mark an image Do Not Train in a standard, machine-readable way.
  • Lets viewers verify a news photo really came from the photographer who claims it.
  • Survives screenshots via soft bindings (watermarks and fingerprints).
  • Free to use through Adobe's Content Authenticity tool.
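The "survives screenshots" point rests on perceptual fingerprints, which hash what an image looks like rather than its exact bytes. Below is a minimal average-hash sketch (my own illustration, not the fingerprint algorithm C2PA deployments actually use): each pixel is compared to the image mean, so a uniform brightness shift leaves the fingerprint unchanged, while a cryptographic hash of the same bytes changes completely.

```python
import hashlib

def average_hash(pixels: list[int]) -> str:
    """Toy perceptual fingerprint: one bit per pixel, above/below the mean."""
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return hex(int(bits, 2))

# An 8x8 grayscale thumbnail, flattened (values are illustrative).
original = [12, 200, 45, 180, 90, 30, 220, 75] * 8
screenshot = [p + 10 for p in original]  # uniform brightness shift

# Perceptual fingerprint survives the shift; a byte hash does not.
print(average_hash(original) == average_hash(screenshot))    # True
print(hashlib.sha256(bytes(original)).hexdigest()
      == hashlib.sha256(bytes(screenshot)).hexdigest())      # False
```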

What it doesn't do

  • It doesn't force anyone to honour the Do Not Train flag. Adoption by AI companies is voluntary today: OpenAI, Microsoft, Adobe and a handful of others respect it; most scrapers don't.
  • It doesn't hide your content from scrapers. It just labels it.
  • A bad actor who controls the full pipeline can strip metadata; soft bindings mitigate but don't eliminate this.

How to use it today

  1. Open Adobe Content Authenticity. It's free, and the public beta runs in a web browser, with a companion Chrome extension.
  2. Sign in once with an Adobe ID (or any supported identity provider) so your credentials can be verified later.
  3. Drag in a photo, artwork, or video. Fill in the fields you want to attach: creator name, social handles, AI usage disclosure, and the Do Not Train flag.
  4. Export the signed file. Upload anywhere. Verifiers can inspect the credential at contentcredentials.org/verify.
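Conceptually, a verifier such as contentcredentials.org/verify repeats the signer's work: recompute the asset hash, then check the signature over the claim. The round trip of steps 3 and 4 can be sketched with stdlib stand-ins (HMAC here replaces the certificate-based signatures real Content Credentials use, and the field names are hypothetical):

```python
import hashlib
import hmac
import json

def export_signed(asset: bytes, fields: dict, key: bytes) -> dict:
    """Toy version of step 4: bundle the fields with the asset hash, sign it."""
    claim = {"assertions": fields,
             "asset_sha256": hashlib.sha256(asset).hexdigest()}
    blob = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "signature": hmac.new(key, blob, hashlib.sha256).hexdigest()}

def verify(asset: bytes, credential: dict, key: bytes) -> bool:
    """What a verifier checks: the asset hash matches and the signature holds."""
    claim = credential["claim"]
    if claim["asset_sha256"] != hashlib.sha256(asset).hexdigest():
        return False
    blob = json.dumps(claim, sort_keys=True).encode()
    return hmac.compare_digest(
        hmac.new(key, blob, hashlib.sha256).hexdigest(),
        credential["signature"])

photo = b"...exported image bytes..."
cred = export_signed(photo, {"creator": "A. Artist", "do_not_train": True}, b"key")
print(verify(photo, cred, b"key"))          # True: credential checks out
print(verify(photo + b"!", cred, b"key"))   # False: file was altered after export
```
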
Honest framing

C2PA is a standard, not a law. It works the way seatbelts work: useful if everyone installs them, useless if manufacturers skip the hardware. The moment to attach credentials is before a file leaves your hands.

Limitations and criticisms

The C2PA ecosystem has received three main critiques from civil-society groups: centralisation around a small trust list of certificate authorities; privacy concerns when camera manufacturers embed signer identity by default; and the risk that absence of a credential gets read as "this is fake," penalising people who can't afford signing infrastructure.

The C2PA Technical Working Group's response, summarised in their 2025 FAQ, is that Content Credentials are always opt-in, that the credential records well-formed provenance rather than truth, and that the absence of a credential must never be treated as proof of forgery. That's the claim on paper; downstream platforms implement it with varying fidelity.

Further reading
