At a recent online conference, I said that we can “change the global Internet conversation for the better, by making it harder for liars to lie and easier for truth-tellers to be believed.” I was talking about media — images, video, audio. We can make it much easier to tell when media is faked and when it’s real. There’s work to do, but it’s straightforward stuff and we could get there soon. Here’s how.
The Nadia story · This is a vision of what success looks like.
Nadia lives in LA. She has a popular social-media account with a reputation for stylish pictures of urban life. She’s not terribly political, just a talented street photog. Her handle is “[email protected]”.
She’s in Venice Beach the afternoon of Sunday August 9, 2026, when federal agents take down a vendor selling cheap Asian ladies’ wear. She gets a great shot of an enforcer carrying away an armful of pretty dresses while two more bend the merchant over his countertop. None of the agents in the picture are in uniform, all are masked.
She signs into her “CoolTonesLA” account on hotpix.example and drafts a post saying “Feds raid Venice Beach”. When she uploads the picture, there’s a pop-up asking “Sign this image?” Nadia knows what this means, and selects “Yes”. By midnight her post has gone viral.
As a result of Nadia agreeing to “sign” the image, anyone who sees her post, whether in a browser or a mobile app, also sees a little “Cr” badge in the photo’s top right corner. Mousing over it brings up a little pop-up identifying who signed the image and when, with a couple of links.
The links point to Nadia’s feed and her instance’s home page. Following them can give the reader a feeling for what kind of person she is, the nature of her server, and the quality of her work. Most people are inclined to believe the photo is real.
Marco is a troublemaker. He grabs Nadia’s photo and posts it to his social-media account with the caption “Criminal illegals terrorize local business. Lock ’em up!” He’s not technical and doesn’t bother stripping the metadata. Since the picture is already signed, he doesn’t get the “Sign this picture?” prompt. Anyone who sees his post will see the “Cr” badge and mousing over it makes it pretty clear that it isn’t what he says it is. Commenters gleefully point this out. By the time Marco takes the post down, his credibility is damaged.
Maggie is a more technical troublemaker. She sees Marco’s post and likes it, strips the picture’s metadata, and reposts it. When she gets the “Sign this picture?” prompt, she says “No”, so it doesn’t get a “Cr” badge. Hostile commenters accuse her of posting a fake, saying “LOL badge-free zone”. It is less likely that her post will go viral.
Miko isn’t political but thinks the photo would be more dramatic if she Photoshopped it to add a harsh dystopian lighting effect. When she reposts her version, the “Cr” badge won’t be there because the pixels have changed.
Morris follows Maggie. He grabs the stripped picture and, when he posts it, says “Yes” to signing. In his post the image will show up with the “Cr” and credit it to him, with a “posted” timestamp later than Nadia’s initial post. Now, the picture’s believability will depend on Morris’s. Does he have a credible track record? Also, there’s a chance that someone will notice what Morris did and point out that he stole Nadia’s picture.
(In fact, I wouldn’t be surprised if people ran programs against the social-network firehose looking for media signed by more than one account, which would be easy to detect.)
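To make “easy to detect” concrete, here’s a rough sketch in Go of what such a firehose scanner might look like. The Post shape and the firehose() feed are placeholders I made up for whatever the real stream and metadata extraction would provide.

```go
// firehose_scan.go: a sketch of scanning a stream of posts for media that
// has been signed by more than one account. Post and firehose() are
// placeholders; a real scanner would pull the media hash and signer out of
// each item's C2PA metadata.
package main

import "fmt"

// Post is a stand-in for whatever a firehose item looks like.
type Post struct {
	MediaHash string // hash of the pixel data, ignoring metadata
	Signer    string // e.g. "[email protected]"
	URL       string
}

func main() {
	// media hash -> signer -> first URL seen with that signer
	signersByMedia := map[string]map[string]string{}

	for post := range firehose() {
		seen, ok := signersByMedia[post.MediaHash]
		if !ok {
			seen = map[string]string{}
			signersByMedia[post.MediaHash] = seen
		}
		if _, dup := seen[post.Signer]; !dup {
			seen[post.Signer] = post.URL
		}
		if len(seen) > 1 {
			fmt.Printf("media %s signed by %d different accounts: %v\n",
				post.MediaHash, len(seen), seen)
		}
	}
}

// firehose is a placeholder for a real stream of posts; it just closes
// immediately so the sketch compiles and runs.
func firehose() <-chan Post {
	ch := make(chan Post)
	close(ch)
	return ch
}
```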
That’s the Nadia story.
How it’s done · The rest of this piece explains in some detail how the Nadia story can be supported by technology that already exists, with a few adjustments. If jargon like “PKIX” and “TLS” and “Nginx” is foreign to you, you’re unlikely to enjoy the following. Before you go, please consider: Do you think making the Nadia story come true would be a good investment?
I’m not a really deep expert on all the bits and pieces, so it’s possible that I’ve got something wrong. Therefore, this blog piece will be a living document: I’ll correct any convincingly reported errors, with the goal that it accurately describes a realistic technical roadmap to the Nadia story.
By this time I’ve posted enough times about C2PA that I’m going to assume people know what it is and how it works. For my long, thorough explainer, see On C2PA. Or, check out the Content Credentials Web site.
Tl;dr: C2PA is a list of assertions about a media object, stored in its metadata, with a digital signature that covers both the assertions and the bits of the picture or video.
This discussion assumes the use of C2PA and also an in-progress specification from the Creator Assertions Working Group (CAWG) called Identity Assertion.
Not all the pieces are quite ready to support the Nadia story. But there’s a clear path forward to closing each gap.
“Sign this picture?” · C2PA and CAWG specify many assertions that you can make about a piece of media. For now let’s focus just on what we need for provenance. When the media is uploaded to a social-network service, there are two facts that the server knows, absolutely and unambiguously: Who uploaded it (because they’ve had to sign in) and when it happened.
In the current state of the specification drafts, “Who” is the cawg.social_media property from the draft Identity Assertion spec, section 8.1.2.5.1, and “When” is the c2pa.time-stamp property from the C2PA specification, section 18.17.3.
I think these two are all you need for a big improvement in social network media provenance, so let’s stick with them.
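To make that concrete, here’s an illustrative sketch (in Go, with made-up field names; the authoritative shapes live in the specs cited above) of the two facts the server would be asserting. Real C2PA manifests are binary JUMBF/CBOR structures, not JSON, and the real time-stamp assertion has its own format; this is just to show what information gets signed.

```go
// assertions.go: an illustrative (not spec-exact) model of the two
// provenance facts a social-media server can assert when media is uploaded.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// SocialMediaIdentity loosely mirrors the idea behind the draft
// cawg.social_media property: which signed-in account did the upload.
// The URI form is an assumption for illustration.
type SocialMediaIdentity struct {
	URI string `json:"uri"` // e.g. "https://hotpix.example/@CoolTonesLA"
}

// ProvenanceAssertions bundles the Who and the When.
type ProvenanceAssertions struct {
	Who SocialMediaIdentity `json:"cawg.social_media"`
	// When the server accepted the upload; the real c2pa.time-stamp
	// assertion is defined by the C2PA spec, this is a simplification.
	When time.Time `json:"c2pa.time-stamp"`
}

func main() {
	a := ProvenanceAssertions{
		Who:  SocialMediaIdentity{URI: "https://hotpix.example/@CoolTonesLA"},
		When: time.Date(2026, time.August, 9, 17, 42, 0, 0, time.FixedZone("PDT", -7*3600)),
	}
	out, _ := json.MarshalIndent(a, "", "  ")
	fmt.Println(string(out))
}
```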
What key? · Let’s go back to the Nadia story. It needs the Who/When assertions to be digitally signed in a way that will convince a tech-savvy human or a PKIX validation library that the signature could only have been applied by the server at hotpix.example.
The C2PA people have been thinking about this. They are working on a Verified News Publishers List, to be maintained and managed by, uh, that’s not clear to me. The idea is that C2PA software would, when validating a digital signature, require that the PKIX cert is one of those on the Publishers List.
This isn’t going to work for a decentralized social network, which has tens of thousands of independent servers run by co-ops, academic departments, municipal governments, or just a gaggle of friends who kick in on Patreon. And anyhow, Fediverse instances don’t claim to be “News Publishers”, verified or not.
So what key can hotpix.example sign with?
Fortunately, there’s already a keypair and PKIX certificate in place on every social-media server: the one it uses to support TLS connections. The one at tbray.org, which is being used right now to protect your interaction with this blog, is in /etc/letsencrypt/live/ and the private key is obviously not generally readable.
That cert contains the public key corresponding to the host’s private key, the cert’s ancestry, and the host name. That’s all any PKIX library needs to verify that yes, this could only have been signed by hotpix.example. However, there will be objections.
Objection: “hotpix.example is not a Verified News Publisher!” True enough; the C2PA validation libraries would have to accept X.509 certs. Maybe they do already? Maybe this requires an extension of the current specs? In any case, the software’s all open-source and could be forked if necessary.
Objection: “That cert was issued for the purpose of encrypting TLS connections, not for some weird photo provenance application. Look at the OID!” OK, but seriously, who cares? The math does what the math does, and it works.
Objection: “I have to be super-careful about protecting my private key and I don’t want to give a copy to the hippies running the social-media server.” I sympathize but, in most cases, social media is all that server’s doing.
Having said that, it would be great if there were extensions to Nginx and Apache httpd where you could request that they sign the assertions for you. Neither would be rocket science.
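To show how little machinery the signing step needs, here’s a rough sketch using Go’s standard crypto libraries and the host’s existing Let’s Encrypt key. A real implementation would emit a proper C2PA (COSE) signature structure embedded in the media’s metadata rather than a bare signature over a digest; the key path and encoding here are assumptions about a typical Let’s Encrypt setup.

```go
// sign_assertions.go: a sketch of signing uploaded-media assertions with the
// host's existing TLS private key. This shows only the raw signing step; a
// real implementation would build a C2PA/COSE structure and embed it in the
// media's metadata.
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical inputs: the serialized assertions and the image bytes.
	assertions := []byte(`{"cawg.social_media": "...", "c2pa.time-stamp": "..."}`)
	image, err := os.ReadFile("upload.jpg")
	if err != nil {
		log.Fatal(err)
	}

	// Load the same private key the TLS stack uses (path assumes a typical
	// Let's Encrypt layout; older certbot setups may write PKCS#1 or SEC1
	// keys, which need ParsePKCS1PrivateKey or ParseECPrivateKey instead).
	keyPEM, err := os.ReadFile("/etc/letsencrypt/live/hotpix.example/privkey.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(keyPEM)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	parsed, err := x509.ParsePKCS8PrivateKey(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	signer, ok := parsed.(crypto.Signer) // RSA and ECDSA keys both satisfy this
	if !ok {
		log.Fatal("key does not implement crypto.Signer")
	}

	// Sign a digest covering both the assertions and the pixels.
	digest := sha256.Sum256(append(assertions, image...))
	sig, err := signer.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("signature: %x\n", sig)
}
```

The Nginx or Apache extension I’m imagining would do essentially this on the server’s behalf, so the web server process never has to hand the key material to the social-media application at all.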
OK, so we sign Nadia’s Who/When assertions and her photo’s pixels with our host’s TLS key, and ship it off into the world. What’s next?
How to validate? · Verifying these assertions, in a Web or mobile app, is going to require a C2PA library to pick apart the assertions and a PKIX library for the signature check.
We already have c2pa-rs, Rust code with MIT and Apache licenses. Rust libraries can be called from some other programming languages but in the normal course of affairs I’d expect there soon to be native implementations. Once again, all these technologies are old as dirt, absolutely no rocket science required.
How about validating the signatures? I was initially puzzled about this one because, as a programmer, I only run into certs when I do something like http.Get() and the library takes care of all that stuff. So I can’t speak from experience.
But I think the infrastructure is there. Here’s a Curl blogger praising Apple SecTrust. Over on Android, there’s X509ExtendedTrustManager. I assume Windows has something. And if all else fails, you could just download a trusted-roots file from the Curl or Android projects and refresh it every week or two.
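For what it’s worth, here’s a sketch of the PKIX part using Go’s standard crypto/x509, with the system root store standing in for whichever trust source the app ships with. In real life the leaf and intermediate certs would be pulled out of the C2PA signature data; reading them from PEM files here is just a stand-in.

```go
// verify_chain.go: a sketch of the PKIX side of validation: check that the
// certificate chain leads to a trusted root and that the leaf was issued to
// the host that claims to have signed the media.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func loadCert(path string) *x509.Certificate {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	return cert
}

func main() {
	leaf := loadCert("leaf.pem")                 // the signing host's cert
	intermediate := loadCert("intermediate.pem") // e.g. a Let's Encrypt intermediate

	roots, err := x509.SystemCertPool() // or build a pool from a downloaded roots file
	if err != nil {
		log.Fatal(err)
	}
	intermediates := x509.NewCertPool()
	intermediates.AddCert(intermediate)

	// DNSName makes Verify confirm the cert was actually issued to the host
	// the provenance claim names.
	_, err = leaf.Verify(x509.VerifyOptions{
		DNSName:       "hotpix.example",
		Roots:         roots,
		Intermediates: intermediates,
	})
	if err != nil {
		log.Fatalf("chain does not verify: %v", err)
	}
	fmt.Println("signed by a cert validly issued to hotpix.example")
}
```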
What am I missing? · This feels a little too easy, something that could be done in months not years. Perhaps I’m oversimplifying. Having said that, I think the most important thing to get right is the scenarios, so we know what effect we want to achieve.
What do you think of the Nadia story?