New Leica M11-P Features C2PA content signing
Leica has just introduced the “-P” variant of their M11 camera. This new model adds the ability to embed CAI / C2PA signatures in photos. This is the first production camera to reach the market with built-in Content Authenticity Initiative / Coalition for Content Provenance and Authenticity capabilities. What does that mean? How does it work? Is it a big deal?
The CAI / C2PA process is a tool for showing that your photo is what it claims to be. You, or I, or anyone, can verify whether and how a digitally signed photo has been altered during its life. It makes a signed image ‘tamper-evident’.
It’s real
UPDATED CONTENT
It “brings the receipts”, so to speak.
On a webpage, for instance, you’ll be able to click on an icon and know if an image has been fiddled with.
The C2PA process adds a blob of cryptographically signed metadata to your photos. Inside, there will be a digital signature, one or more thumbnails, a verbal description of what has been done to the picture at given points in its life, and a checksum of the whole of your image file. That checksum, when compared against the file later, will reveal if even one itty, bitty bit was changed.
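To make the checksum idea concrete, here is a bare-bones Python sketch of how tamper evidence works in principle. To be clear: this is not the actual C2PA hashing scheme, which (as I understand it) hashes defined byte ranges of the file and signs the digest along with the rest of the manifest. The file name here is made up.

```python
# Minimal sketch of the tamper-evidence idea, NOT the real C2PA scheme.
# The real process hashes specific byte ranges (excluding the manifest
# itself) and then cryptographically signs the result.
import hashlib

def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# At signing time, the digest gets recorded (and signed).
recorded = fingerprint("street_scene.jpg")   # hypothetical file name

# Later, anyone can recompute it. Changing even one bit of the hashed
# data produces a completely different digest.
if fingerprint("street_scene.jpg") != recorded:
    print("File has been altered since it was signed.")
```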
Watch the video version of this post
If the C2PA trust chain is unbroken, that means that no change, no matter how tiny, has been made to the image without being recorded in the blob of C2PA metadata. If something happens that breaks the chain, all is not lost: you still have evidence of authenticity for everything that happened before the chain broke.
Important point: The C2PA process can give you reasonable confidence that a given photo is what it purports to be. That’s all it does. It can’t prevent the alteration of a picture. And the lack of a C2PA signature doesn’t mean that an image is fake. It can say, “Yes, you can trust this photo.” It can’t say, “No, you definitely can’t trust this photo.”
In addition to verbal descriptions of what has been done to the picture, you have those thumbnails. The camera records a picture – bang! Thumbnail. You tone the picture and add a caption – bang! Thumbnail. A picture editor crops the picture – bang! Thumbnail. ‘Future You’ — or anybody — can call up the authentication record and visually compare the camera-original picture to what’s in front of you and see how (or if) the image changed at each stage in its lifecycle. The thumbnails, by the way, are large enough to be pretty useful.
The C2PA data can also be stored in the cloud, separate from your picture file. That means this may be the durable, stripping-resistant metadata we’ve needed for so long!
You can’t tell it’s even there
In use, you won’t really experience any difference between working with a signed picture and an unsigned one. Your picture and caption are not encrypted. They’re not altered by the C2PA process at all. This bears repeating: the C2PA manifest is signed, not encrypted, and it is the only part of the file the process touches. The photo and its normal metadata will behave exactly as they always have.
Whatever software works with your photo now will work with signed photos. If you are working in a C2PA-enabled application, you’ll be able to re-sign the file when you are done editing to preserve the chain of authenticity. Again, you can use any old editing tools you want. But only C2PA-enabled ones let you preserve the signature chain.
The CAI / C2PA process is not like the dreadful “DRM” of the eighties and nineties. Digital Rights Management was an attempt to make media (music, mostly) not work unless a user paid for it. Whether a media file has or doesn’t have a C2PA signature doesn’t affect access to that file at all. C2PA can never lock you out of your pictures.
This is a long post. I’ve attempted to do a comprehensive job of explaining a complicated and technical subject. You’re embarking on a 25-minute read. Be forewarned. Grab a coffee and settle in.
Blockchained to a cloud
UPDATED CONTENT
Rather than drag a bunch of bulky thumbnails around with your picture file, you can choose to store the C2PA metadata in the cloud, in a “blockchain-like” way.
What we’re talking about is a special kind of database that you can add to but not edit or delete from. You can add a record, but not change it. Check in but never leave, as the Eagles said in song so memorably.
Think of a property record for a house. You want every change in the house’s ownership to be recorded forever. Enter a new deed in the ledger and it stays there permanently. You don’t want me, or anybody, altering your deed later to say that your house is my house. Same principle.
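If you want to see the principle in code, here is a toy Python sketch of an append-only, tamper-evident ledger: each record carries a hash of the previous record, so a past entry can’t be quietly rewritten without breaking everything that came after it. This illustrates the idea only. It is not how Adobe’s, or anybody’s, actual C2PA ledger service is built.

```python
# Toy illustration of an append-only, tamper-evident ledger.
# Each record includes a hash of the previous record, so rewriting
# history breaks the chain. Principle only; not a real ledger service.
import hashlib
import json
import time

ledger = []  # in real life this lives on a server, not in a Python list

def append_record(data: dict) -> dict:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"data": data, "prev": prev_hash, "time": time.time()}
    body["hash"] = hashlib.sha256(
        json.dumps({k: body[k] for k in ("data", "prev", "time")},
                   sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)
    return body

def verify_chain() -> bool:
    prev = "0" * 64
    for rec in ledger:
        expected = hashlib.sha256(
            json.dumps({k: rec[k] for k in ("data", "prev", "time")},
                       sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

append_record({"owner": "Jane Photographer", "asset": "house_deed_001"})
append_record({"owner": "New Owner", "asset": "house_deed_001"})
print(verify_chain())  # True, until somebody tries to rewrite an old entry
```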
In addition to saving space on your hard drive and download time on your website, saving your picture’s C2PA credentials in the cloud means that if the C2PA metadata is stripped off your file or altered by a hacker, your picture can be matched back up with its original C2PA data through reverse image search. That should put a crimp in the bad guys’ plans.
IPTC Managing Director Brendan Quinn took me to task when this post was first published for playing fast and loose with the word “blockchain”. He makes two points.
The first is semantic confusion. There is no “official” definition of “blockchain”. The immutable, permanent ledger we need can be accomplished in a number of different ways, any one of which could be a “blockchain” or not, depending on the method used and a given person’s definition. For example, by my definition — and Quinn’s — the only currently available C2PA ledger server, operated by Adobe, is not a blockchain.
His second point is that the word has a negative connotation for many people. Many folks are (unnecessarily) suspicious of the technology because of carbon-based shenanigans in the cryptocurrency market. “The C2PA team has been studiously avoiding the term up to now for that very reason,” he writes. Points taken. No more B-word.
(This reminds me a lot of debating what makes a camera a “rangefinder”. Is it about the bright-line viewfinder? Or the way the thing sets focus? Is the Contax G2 or the Fujifilm X-Pro a rangefinder? One doesn’t have the bright-line finder. Neither possesses the parallax-based, coincident rangefinder from which the type gets its name. But in shooting they both function like rangefinders. The Leica M11-P at the top of this article has both the special finder and the optomechanical focusing gizmo, by the way. In addition to C2PA signing.)
Content Authenticity Initiative
The Content Authenticity Initiative is a consortium founded by Adobe, The New York Times, and Twitter (Yes, Twitter) to develop an industry-standard method of authenticating visual content. Given who started the ball rolling, the early emphasis has been on fighting disinformation and fake news. But the implications are much broader.
The Content Authenticity Initiative kicked off with a public meeting in 2019. Speaking in a CAI community showcase in November, Leica’s Nico Koehler said that within a few days of the kickoff, two Post-it notes appeared on the wall at Leica headquarters in Wetzlar, Germany. One said “authenticity” and the other, “CAI”. Three years later, we see — from Leica — the first piece of C2PA-equipped hardware that photographers (granted, well-healed ones) can benefit from.
The idea is not to develop a gizmo that creates honest content but to create an open standard that everybody who needs verifiable content can use or bake into their own gizmos. Think SSL for websites or the IPTC standard for descriptive metadata. Anybody who wants to build a gizmo that can caption a picture or deliver a website can follow the IPTC or SSL standard and make something that will work with anybody else’s gizmo or software or website, so long as both parties conform to the standard.
My understanding of how this works is that the CAI (Content Authenticity Initiative) members are building the technology. The C2PA (Coalition for Content Provenance and Authenticity) is the body created to be the conservators or administrators of the resulting standard. Like the IPTC is for the standards it administers.
As best I can determine, it’s correct to call it “C2PA metadata”, just like we do with IPTC metadata. So, that’s what I’ll do throughout this post. I’ll call each organization by its own name. But it’s “C2PA metadata”, “C2PA-enabled” cameras, “C2PA signatures” and the like.
How does it work?
I obtained some sample images from the new Leica, thanks to the kindness of the wonderful folks at Leica Store Miami. I have been experimenting with those images to get a feel for the process, or at least the bits of it that are available at the moment.
Note that we’re in early days here. I’m learning more about how this works with each passing day. At some point, I’ll have to press the “post” button on this blog. There’s a non-zero chance that some glaring error will yet come to light. If so, it will be entirely my fault. If you find a problem, please let me know. I’ll write a correction. In the meantime, you have been warned…
To enable C2PA signing on the M11-P, you simply go into the menu, to the place where you turn on the feature that writes your name and copyright statement into the Exif metadata of your photos. Flip the toggle there that enables the C2PA function. That’s it. That’s all you have to do.
The camera will still write those fields into your Exif metadata and now it will also write those values, along with the other items I mentioned earlier, into the C2PA record itself.
Moving along
Later in the workflow, as software like Photoshop makes changes, more notations and more thumbnails will be added. Future You will be able to see a summary of what you did and compare certified copies of what the picture looked like at the beginning and end of each step in its journey through digital life. Your picture will be “tamper-evident”.
A composite illustration or one containing AI-generated elements can be signed. But all that stuff will be visible to anyone who cares to see what went into your picture.
Copyright information in its normal location will be protected by the checksum. But it can also be included in the C2PA data and cloud-based ledger, if your software supports doing so.
Content authenticity credentials and RAW files?
C2PA authentication can be added to just about any type of media: video and audio files as well as various photo file formats. The Leica will sign both JPEGs and DNG RAWs. But at this point, software in the rest of the workflow (well, Photoshop) can only verify and sign JPEGs. Support for RAW formats will come soon enough. For camera RAW formats that don’t accept embedded metadata, there is a provision to make C2PA sidecar files, like the ones we already use for XMP data.
While there is no photographer-facing software at the moment that can even see C2PA data in DNGs or other RAW files, I was able to use the CAI’s own command line C2PAtool to read and write signatures in DNGs and make C2PA sidecar-equipped Nikon NEFs. It works.
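For the terminally curious, here is roughly how I’ve been driving C2PAtool from a small Python script. The flag names reflect my reading of the tool’s documentation and may differ between versions, so check `c2patool --help` before trusting any of this; the file names are made up.

```python
# Rough sketch of scripting c2patool from Python. Flag names are my
# reading of the tool's docs and may vary by version; verify locally.
import json
import subprocess

def read_manifest(path: str) -> dict:
    """Ask c2patool to print the C2PA manifest store for a file as JSON."""
    out = subprocess.run(
        ["c2patool", path], capture_output=True, text=True, check=True
    )
    return json.loads(out.stdout)

def sign_with_sidecar(src: str, manifest_json: str, dst: str) -> None:
    """Sign a file, asking for the C2PA data in a sidecar file."""
    subprocess.run(
        ["c2patool", src, "-m", manifest_json, "-o", dst, "--sidecar"],
        check=True,
    )

# Example calls (hypothetical file names):
# info = read_manifest("L1001234.DNG")
# sign_with_sidecar("D850_0001.NEF", "my_manifest.json", "D850_0001_signed.NEF")
```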
The C2PA metadata block is pretty hefty. It can weigh in at about half a megabyte once it’s got a few thumbnails. That’s trivial in the context of out-of-camera files from a 60-megapixel camera. (The M11-P’s JPEGs are about 13 MB. Its RAW files weigh in at about 70 MB.)
But half a meg is pretty huge in the context of a 100-kilobyte image on a web page or even a one-megabyte file from a cellphone camera. What’s more, metadata that a bad actor can’t strip off an image file hasn’t been invented yet. Part of the C2PA’s goal was to make their metadata difficult to strip. But still.
You will have the option of storing your C2PA data on a cloud-based system and perhaps not keeping it embedded in the image file. That seems like a pretty good idea. It becomes possible at whatever step in the workflow finds your file on an internet-connected platform.
Again, saving your data to the cloud means a couple of good things. First, you save the expense of storing that great big blob of thumbnails and stuff on disk. Second, your signature and audit trail of thumbnails are now backed up. If a nasty person manages to strip your metadata, she or he starts having a bad day as soon as somebody takes your image to a verification tool and — lo and behold — there is your authenticity certificate! And your original copyright statement! Take that, forces of evil!
Processing in Photoshop (beta)
I took M11-P-produced images into the beta version of Photoshop, edited them, and exported re-signed versions.
In the Photoshop beta, you enable C2PA content credentials with a big button in a floating palette (from the main menu, go to Window > Content Credentials Beta). Then work as you normally do. You can’t tell that an image is C2PA-signed unless you look for the little Content Credentials icon in the title tab. When you are done, you export your signed image in the horrible Export As… dialog.
Starting with the fact that Export As… strips metadata, making it impossible to caption your image, and going from there, Export As… is a chamber of horrors. Surely there will be a better way as Photoshop moves from beta to a production version.
If you choose to save your authentication in the cloud, there is a URL in plain old XMP metadata that points to your picture’s manifest record. It’s in a tag called “Provenance” in the XMP metadata. There may be another pointer. I haven’t found it yet, though.
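Until a friendlier tool comes along, here is a quick-and-dirty Python sketch for fishing that pointer out of a file. It just pattern-matches on the word “provenance” in the raw bytes rather than parsing the XMP properly, and the exact tag name and namespace are assumptions based on what I see in my own test files, so treat it as a starting point only.

```python
# Quick-and-dirty scan for a cloud manifest pointer in a file's XMP.
# The "provenance" tag name is an assumption based on my test files;
# this does not parse XMP properly, it just pattern-matches raw bytes.
import re
from typing import Optional

def find_provenance_url(path: str) -> Optional[str]:
    with open(path, "rb") as f:
        data = f.read()
    # Match either an attribute form (...:provenance="https://...")
    # or an element form (<...:provenance>https://...</...>).
    m = re.search(
        rb'provenance[^>]*?["\'>](https?://[^"\'<\s]+)', data, re.IGNORECASE
    )
    return m.group(1).decode() if m else None

# print(find_provenance_url("exported_from_photoshop.jpg"))  # hypothetical file
```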
In Photoshop you can choose between saving your CAI metadata locally or in the cloud. One or the other, but not both. The C2PA process does, however, support saving to both places in one go. C2PAtool will do it. Software developers who choose to offer this as a feature can do so.
Will it find a stolen image?
I saved authenticity info from an image to the cloud and then “stole” the image by copying its pixels and pasting them into a new blank document, thus stripping all of the image’s metadata. Then I took the resulting “stolen” image to the Verify tool. Voila! Verify reconnected the image to its authenticity data.
Clearly, this depends on reverse image search in the cloud ledger. Depending on how beat up the image gets between you and the time somebody tries to verify it, the dependability of the match may be less than great. But one advantage of living in the era of AI is that reverse image search is good and getting better by the day. Imagine that. AI actually helping a photographer beat back an assault from the copyright-infringing Forces of Evil. What strange times we live in.
Exactly how the cloud ledger landscape will evolve is an open question. Who will operate the ledger(s)? How many will there be? Will they be confederated in some way? I have no idea. Stay tuned.
At a CAI Member’s Showcase event in November, two private blockchain operators talked about integrating C2PA into broader security solutions. We could see wonderful products built for specific use cases. We could see some chaos, too.
This is a good place to point out that it’s very early days here. Everything we are dealing with is brand new. The only photo software I know of that can do a signature is a beta version of Photoshop. The Leica is the very first commercial camera to be able to write authentication data. The CAI Verify tool is just a demonstration piece. None of these things are mature products. There is functionality missing from each piece of the chain.
More C2PA-capable cameras on the way
While I wrote this post, Sony announced that its new Alpha 9 III, which will ship in February 2024, will have C2PA capability. Nikon has announced that it will soon introduce a C2PA-capable camera. (The rumored Z9-H, I suppose.) Canon is thought to be developing one too. Photographers who shop for top-of-the-line photojournalism-oriented cameras will have C2PA-capable choices very soon.
Interestingly, while the Leica uses a chip separate from its main image processor to do C2PA signing, Sony has said that their new camera will do the C2PA work on the main processor chip. Clearly, Sony has in mind using a seriously powerful processor in that camera.
Does this suggest that one day camera manufacturers might offer C2PA signing as a firmware update for existing cameras? The very last paragraph of a joint Sony / Associated Press / Camera Bits press release says yes! Sony says that it will offer C2PA signing as a firmware upgrade for two popular existing cameras, the Alpha 1 and Alpha 7S III, in early 2024. Whether there will be a big performance trade-off is an open question.
By all accounts, the cryptographic work involved in creating a signature consumes a lot of CPU cycles. Koehler, Leica’s Head of Product Experience, told attendees of that CAI members’ showcase that Leica’s first attempt at a software-only signing tool took 40 seconds per frame to do its job. Thus, the dedicated chip. The dedicated chip has security implications, as well. I’m sure Leica’s and Sony’s marketing departments will argue the relative merits.
I think I speak for most photographers when I say that I really only care that it works and that it be available to as many photographers as possible, as soon as possible.
Merriam-Webster’s Word of the Year for 2023: Authentic
Metadata! What about my metadata?
UPDATED CONTENT
I can hear readers of this blog practically screaming, “What about metadata? What does this mean for IPTC metadata on my images?!”
Take a breath. IPTC metadata functionality is just fine. Metadata, of the IPTC sort, Exif, PRISM, or what-have-you, continues to live right where it has always been, in the same old data blocks. It works exactly the same way as always.
Changes to IPTC and other metadata “count” as changes to the image. Editing metadata will force a re-signing of the image. In that way, the CAI /C2PA process protects normal metadata.
That said, there is nothing preventing developers from copying select pieces of descriptive metadata to the cryptographically protected C2PA data. Or all of your metadata, for that matter. IPTC metadata does not take up a ton of space.
If a developer chooses to copy some or all of the IPTC metadata into C2PA data, that does not mean that C2PA data will take the place of IPTC (or Exif, or ICC, or any other) metadata.
C2PA data will not replace the function of other metadata!
In that case, a developer is still responsible for making sure that IPTC and other metadata works as it should.
IPTC metadata can be inserted into a C2PA record
There is an assertion defined in the C2PA standard that carries IPTC metadata. At the moment, we don’t have any workflow software capable of making that assertion. But we certainly have the possibility going forward. (An “assertion” is simply C2PA-speak for a statement encoded in the block of C2PA data. The C2PA data block that carries assertions is called a “manifest”.)
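No shipping software writes that assertion yet, but to give you a feel for the shape of the thing, here is a hedged sketch of a manifest definition carrying IPTC fields, written as a Python dict you could dump to JSON for something like C2PAtool. The assertion label (“stds.iptc”) and the field spellings follow my reading of the C2PA and IPTC specs and may not match what real tools eventually produce; the app name, photographer name, and email address are placeholders.

```python
# Hedged sketch of a manifest definition carrying an IPTC assertion.
# Label and field spellings are my reading of the specs, not the output
# of any shipping product. Names and addresses below are placeholders.
import json

manifest_definition = {
    "claim_generator": "MyWorkflowTool/0.1",   # hypothetical app name
    "assertions": [
        {
            "label": "stds.iptc",
            "data": {
                "dc:creator": ["Jane Photographer"],
                "dc:rights": "© 2023 Jane Photographer. All rights reserved.",
                "Iptc4xmpCore:CreatorContactInfo": {
                    "CiEmailWork": "jane@example.com"
                },
                "dc:description": "Original field caption goes here.",
            },
        }
    ],
}

with open("my_manifest.json", "w") as f:
    json.dump(manifest_definition, f, indent=2)
```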
We should note here that the record of any given IPTC metadata that has been copied into a C2PA manifest won’t necessarily be in sync with the real, working IPTC metadata. Nor should it be. That’s the idea. Think of the manifest as a snapshot of some point in the past. The camera made a C2PA snapshot of the picture as it was when the camera shot it, which you can compare with how the image looks now. Your metadata editing software will make a snapshot of how your caption looked when you wrote it, which you can compare with how the caption looks now.
Your software “will” make such a snapshot only if you ask it to. Or your boss requires it to. Given that use cases and personal sensibilities vary, I can’t imagine a software developer not making this function optional. Assuming that any apps offer it at all. At this very early moment, none do.
Personally, while I cringe at the idea of website consumers reading my field-written, typo-laden caption, I think that original captions should be preserved in the CAI data.
It’s common, even for honest outlets, to use conceptual or “iconic” images, with often platitudinous captions that they hope will shed some light on the greater theme of a story. There’s usually nothing wrong with that. Unless there is. Either way, I think we owe it to readers to make available the original caption so they can see exactly what is shown in the image. C2PA has the potential to do that for us.
We have a long way to go
This would be a good place to talk about stuff that doesn’t quite work yet. Yes, you can take my sample files (available from my OneDrive here…) and the Photoshop beta and successfully (more or less) do the first couple of steps in a workflow. But we don’t have the tools to do practical work yet. It’s going to be a while before you can open that end-to-end verified picture agency.
In Photoshop
The biggest early stumbling block I found is that Adobe used the reprehensible metadata-stripping Export As… function to export signed images. That means that there is no way to caption an image and keep the signing chain unbroken. If you caption your image first and then process it in Photoshop, Photoshop will simply strip off your caption. If you export your image from Photoshop first and then caption it, in say, Photo Mechanic, you’ll break the authenticity chain. (Using a plain old non-C2PA enabled copy of Photo Mechanic, that is. At the moment, there is no software available at all — as in not ExifTool, not any of the programs I cover here, not even C2PAtool, nada, zip — that can add a caption without breaking the authenticity chain.)
So, using the Photoshop beta… no caption. That pretty much stops a photojournalism workflow dead in its tracks.
Between the captioning problem and a lack of support for RAW files, basically any real-world workflow is stonewalled. What we have now in the Photoshop beta is a demo. It’s not a beta in the usual sense of software that’s almost but not quite ready for prime time.
Adobe can, and I certainly hope will, quickly fix this. I get it. Export As… was expedient for the folks writing beta code. But it needs to be fixed before Photoshop is practically usable in a C2PA workflow. There is a chorus of voices, including mine as well as industry heavyweights, howling at Adobe for a fix.
In Photo Mechanic
A professional photojournalism workflow depends on Photo Mechanic. Camera Bits President Dennis Walker tells me that performing the task of editing metadata on a file without compromising the security of the authenticity chain is more difficult than it appeared at the beginning of the CAI/C2PA journey. A lot more difficult, in fact. Dennis says that he has now devised a means of accomplishing the task. He adds that the tests referenced in that Sony press release were performed by the Associated Press using a prototype of a C2PA-enabled version of Photo Mechanic. So we can safely assume that help is on the way. However, right now there is no specific timeline.
In the cloud
UPDATED CONTENT
One of the most exciting promises of the C2PA process is the ability to store our C2PA data in the cloud. Cloud storage in a, ahem, blockchain-like environment means that we have a durable record of our data. That’s hugely important if we are to have the durable metadata we all have wanted for so long. Right now, we only have a single proprietary system, operated by Adobe. That’s great for demo purposes. But in the real world, we’ll need a robust, distributed system, operated by a trusted organization. Or probably many such systems, oriented to the needs of different use cases. And we’ll need to come to some sort of consensus about which one to use in a given industry. Maybe the C2PA or another group will “bless” particular systems. All of this is yet to be worked out. The whole matter of cloud storage is still very much up in the air. (Ooof. Blog of terrible puns. I know.) It’s pretty clear that storing, or at least backing up, C2PA data in the cloud is going to be important to see the benefits we hope for.
In the standard
The verbal description of what was done to an image might need some work. Image editing software writes assertions (statements) into the C2PA record describing what it did to the image. Look at the screenshots below showing Photoshop’s C2PA preview for an image that I edited. The first screenie shows how Photoshop described a perfectly-acceptable-for-journalistic-purposes local brightness change.
The second shows what Photoshop wrote about the same image after I crudely cloned its subject into two subjects. Which is very much not acceptable. (For news reporting. It could be fine in other circumstances. YMMV) “Drawing edits” apparently covers an awful lot of ground. Would you be able to discern the difference between the two assertions? Would the average reader of a news site? To be fair, we have the thumbnails to guide us. However, more clarity might be a very good idea.
Or — on the other hand — maybe I have this backward. Maybe we need less verbosity, not more. Maybe it’s all about the thumbnails.
This will be something for app developers, publishers, and end users to work out. Once we have some end users, that is.
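For reference, the “verbal description” we’ve been talking about lives in what the spec calls an actions assertion inside the manifest. Here is a rough sketch of its shape; the action names (“c2pa.color_adjustments”, “c2pa.drawing”) come from my reading of the C2PA spec, and the parameters are illustrative rather than copied from any shipping product.

```python
# Rough sketch of the shape of a C2PA "actions" assertion, where the
# verbal edit history lives. Action names follow my reading of the
# spec; the "parameters" contents are purely illustrative.
actions_assertion = {
    "label": "c2pa.actions",
    "data": {
        "actions": [
            {"action": "c2pa.opened"},
            {"action": "c2pa.color_adjustments",
             "parameters": {"description": "local brightness change"}},
            {"action": "c2pa.drawing",
             "parameters": {"description": "clone / heal edits"}},
        ]
    },
}
```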
UPDATED CONTENT
Since writing this section, I have run headlong into a case that makes me think verbal descriptions won’t ever be terribly helpful at all. I can’t go into details about the case at hand, but the crux of it is that an actor manipulated an image to change its meaning using only global tone and color controls. Think of the cases where certain media outlets have found a crime suspect insufficiently intimidating-looking, so they toned the guy’s mugshot to make his skin look darker. Same idea, applied to evidentiary photos. C2PA thumbnails would have stopped this maneuver cold. But what would the verbal description say? “The same tools and techniques used in every legitimate picture you’ve ever seen were used to falsify this one”? That’s a judgment that’s outside the scope of C2PA, and it would take a pile of AI to even attempt it.
Put me on Team Very Sparse. Like, very sparse.
I should say this now: ALL photos are altered/edited. It’s fundamentally necessary for somebody, whether human or robot, to take the nonsense that a camera captures and turn it into something that represents reality. We have to make that clear. It is imperative that we not allow the C2PA process to cast the impression that AI is honest and people aren’t. We do not want honest people’s transparency to be met with prejudice. I’ll say it again. “The camera doesn’t lie” is BS. Honest photos are edited.
In the camera
The Leica camera imposes an impossibly short character limit on the creator/artist and copyright strings it lets you write on your files.
I advocate that you should follow both PLUS Coalition founder Jeff Sedlik’s advice to always write your copyright statement — even if it’s in metadata — in the legally appropriate form as well as follow my own advice to always include contact information in your metadata copyright statement. Either of which takes a bit of space.
Heck, on the M11, if you’ve got more than a two-syllable name, your name alone won’t fit. Please, Leica, help a brother out here.
By the way, if you do have a Leica… Leica calls the copyright field “Info”. Confused the heck out of me. So much so that, in the Leica store, we just left the placeholder text in place. Later, as you can see in the screenshot, the ExifTool output solved the mystery. Now you know.
Exif metadata in the C2PA record
If you look at the ExifTool output above, you can see that the Leica camera copies a fair chunk of Exif data into a C2PA assertion. The Verify tool doesn’t expose all of that information — yet. Significantly, we can see that it doesn’t show us the copyright field, only the creator field (“Artist” field in Exif). The C2PA specification does allow the contents of virtually any Exif field to be asserted.
If copyright information only exists in the Exif, we should copy it into the C2PA data and expose it. Later, when the valid copyright information is in the IPTC, we should copy that to the C2PA data, too. Regardless of where it’s copied from, I strongly recommend that we add all iterations of it to the C2PA manifest and that software like Verify make them available to anyone checking the provenance of an image.
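Here is a sketch of that idea in practice: pull the creator and copyright strings out of the Exif with ExifTool, then stage them for a C2PA assertion. The “stds.exif” label and the field spellings are my reading of the spec, not output from any current product, and the file name is hypothetical.

```python
# Sketch: read Artist and Copyright from the Exif with exiftool, then
# stage them for a C2PA assertion. The "stds.exif" label and field
# spellings are my reading of the spec and may not match your tooling.
import json
import subprocess

def read_exif_rights(path: str) -> dict:
    out = subprocess.run(
        ["exiftool", "-j", "-Artist", "-Copyright", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(out.stdout)[0]
    return {"artist": tags.get("Artist"), "copyright": tags.get("Copyright")}

def exif_assertion(rights: dict) -> dict:
    return {
        "label": "stds.exif",
        "data": {
            "exif:Artist": rights["artist"],
            "exif:Copyright": rights["copyright"],
        },
    }

# rights = read_exif_rights("L1001234.DNG")   # hypothetical file name
# print(json.dumps(exif_assertion(rights), indent=2))
```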
It’s about time
The matter of timestamps is interesting. The camera records the Exif timestamp to the CAI/C2PA data. Based on the camera’s time, obviously. Good. The timestamp is very useful information. But is it correct?
Readers of this blog know that camera clocks can be out of whack for any number of reasons. Wrong time zones, not adjusting for daylight saving time, or being set for sometime in 1970 can all set things askew.
Hmmm. Check that: a timestamp is more than “useful information”. It can be critical.
In the weeks before I wrote this, there was a kerfuffle over exactly when some pictures of the Hamas attack on Israel were shot. Some self-appointed protectors of the public morality accused several photographers of embedding with terrorists. The matter came down to exactly when some photos were shot. Reputable news organizations were able to determine that the accusations were without merit. But still, we need signed timestamps!
(Yes, at some level, all journalistic organizations are self-appointed protectors of the public’s welfare. That said, it’s a heck of a lot easier to make accusations than it is to make good photos. When we accuse, we have an obligation to be right.)
Signed pictures need to be signed with the right time. The idea of some sort of self-powered clock that could only be adjusted by a secure time server has been mooted. That would be cooler than it would be easy to do, though, I’m afraid.
Mobile phones, on the other hand, have to know the precise correct time to even work. So, no problem there.
A workaround: Until somebody comes up with a genius answer to the camera clock issue, we can use a C2PA-capable phone (when such a thing is available) to make a ‘notes’ picture if we need to establish when we shot something.
By the way… please check the time in your cameras!
Every image, every day?
Do we have to have authentication for every photo published to make a difference? No. Will we ever have every picture verified? No.
I do expect legitimate publishers to make a show of having authenticated images as soon as they possibly can. Not all images can be authenticated, of course, but the more times a reader sees C2PA badges on a publication’s pages, the stronger the impression that the publication cares about accuracy and the stronger its value proposition.
Readers won’t click through to check the credentials for every single picture. After a while, they won’t feel the need. The idea that plenty of images can be checked will be enough. Readers will come to trust that the publication they are paying good money for isn’t pawning off misinformation. Trust is what C2PA is about. Trust, in addition to keeping our society from falling apart, is valuable. It’s good business.
While a publication won’t be able to have from-the-camera authenticity chains for every picture, C2PA authentication can start at any point in a workflow. Once a picture editor has worked on a photo, he or she can sign it, certifying that, to the best of their ability, they have determined that the image is legit.
That way, a publication could sign every picture it publishes. You would be able to check any picture you see on social media and see if it came from a legitimate publication.
Even individuals
A photographer, like, say … me, who can’t afford to buy all new cameras could still sign photos at the editing stage to protect them from that point forward. I plan to do just that. A valid C2PA signature chain, even if it doesn’t go all the way back to the camera, will still go a long way toward proving who the photographer or copyright holder is and what organization published or distributed the picture. We have all pined for a durable form of metadata to protect our pictures. C2PA has just pushed us closer to that goal. And I don’t even have to (get to) buy new gear!
Social media users and political activists and other actors of both the good and bad varieties all try to legitimize themselves by claiming that “The Daily Bugle just published… whatever.” Well, the rubber is about to hit the road. The Bugle did or it didn’t. Put up or shut up. Is the picture signed or not? Real, or faked?
What if a given social media outlet strips metadata (as we know most currently do) and doesn’t support C2PA? That would pretty well prove that platform can’t be trusted, wouldn’t it? C2PA could incentivize media businesses to get on the right side of the whole trust issue. Let’s hope!
Right now, verifying that an image really was published by whatever organization that somebody on social media claims it was takes some effort. Post-C2PA, it will be dead easy.
Is it foolproof? Can it be hacked?
First off, we need to remember that C2PA signing only makes an asset tamper-evident. It isn’t meant to make it impossible to gin up fake content, only to make it harder to pass off fake crap as real.
Of course, bad actors can attack everything.
Off the top of my head, I can see two approaches to hack a C2PA signature. One will be very difficult, will rarely be attempted, even more rarely be successful, and we’ll all get our undies in a bunch about it, because that’s just how we roll on security. The other, well…
The fancy way
In this type of attack, somebody attempts to somehow forge C2PA credentials on false content, be it retouched photos to defraud an insurance company, or some AI-authored kompromat to use in an important political campaign. This would be your wild-eyed, Red Bull-chugging hacker breaking through the signing process to spoof the authentication somehow. Or maybe the man-in-the-middle version, where an attacker intercepts an image, alters it, and tries to forge a signature to make his fakery more convincing.
There better be a lot of money at stake, because that won’t be quick or cheap or easy. If it turns out to be possible at all. In theory, it’s possible to hack anything. In practice, if you make it difficult enough, the bad actors will rarely invest the resources to do the job and everybody lives a happy life. One of the aims of the C2PA process is to make forgery difficult.
UPDATED CONTENT
(As I re-edit this post, we’ve seen a potentially cheap and easy attack against the signing certificate process that the demo CAI Verify doesn’t/didn’t catch. “Potentially cheap and easy” is relative. Easy for sophisticated hackers, not easy for some fake news clown on social media. At this moment, the exploit has been identified and the hole is partially closed. Probably, by the time you read this, it will be closed for good.)
The real way
The other way will be dead easy. Your fake content doesn’t have a C2PA credential? No problem. Just lie and say it does. Lying’s easy.
Here, a scumbag fake news publisher just puts authentic-looking C2PA icons next to their fake pictures and hopes that’s enough to sell their con to their chosen audience. Or maybe they go the extra mile and gin up a reasonable-looking but cheap fake authentication page and link it to that fake icon.
On the one hand, this attack is plausible enough (heck, inevitable) to really worry me.
It would work. Remember that the people who fall for fakes want badly to believe the lies they are being told. That’s how con games and conspiracy theories work. The pigeon is manipulated to want to believe that, for instance, [fill in a war] didn’t actually happen; it was filmed by “crisis actors” in [fill in a place*]. The pigeons feel like big shots because they “know” the inside skinny. And they really, really want to inflict their new-found insight/lie on others. That makes them feel like even more of a big shot. They won’t bust their tails using C2PA or any other tools to check the veracity of whatever garbage they are fed.
But somebody will check. C2PA is a pin that can be used to pop a disinformation balloon. (Or lance the boil, if you prefer). We don’t have to humiliate every lying gasbag, just enough of them to make the rest think twice. Color me fairly optimistic.
(* Lebanon, the country, and Ohio, the US state, in the example I’m thinking of, if you were wondering.)
In a section above, I talk about an attack that was done by manipulating pictures using only the same global adjustments that are used on all images. Now, the C2PA thumbnail would have revealed that one. But it still speaks to my point here. The real threats are simple things, not fancy things.
What’s it all for?
The developers of the C2PA standard are first concerned with disinformation in media. That’s a worthy goal and an obvious market for the technology. But there are others.
C2PA, as a (potential) industry-wide standard, means that everybody can make signed photos that are interoperable with everybody else’s signed photos.
Nowadays, photos are used, by nearly everybody, as evidence — of all kinds of stuff. Filing an insurance claim? Submit a photo. That vase was in one piece when you shipped it? You have a photo that says so.
The contractor who just built a fence for me shot mobile phone pictures of the finished job. The city inspector will do the same tomorrow. So will I. If the fence blows down someday, everybody involved will depend on photos to prove their side of the story. I want mine to have the advantage of being cryptographically verifiable. As will the contractor and the inspector eventually. We’ll all want C2PA signing in our next cellphones.
While I was working on this post, I saw this video by YouTuber Thomas Heaton about photographers who have been caught (or accused of) cheating in photo competitions. Real? Or Photoshop? (Or AI?) On one side or the other in each of the disputes in Heaton’s video, somebody’s bacon would have been saved if the pictures in question had carried C2PA signatures.
Several of Heaton’s examples were wildlife photos. Wildlife photographers buy top-of-the-line cameras and lenses. My official prediction is that they will demand C2PA-capable gear and software sooner rather than later. Gear and software makers with a lick of sense will be scurrying to be able to accommodate them.
We’ll find tons of uses for content authenticity reassurance and wonder how we ever got along without it.
Will it work?
Can this technology single-handedly stop disinformation, fraud, and colorectal cancer?
No, of course not. But it seems like it could be a good tool. We shouldn’t denigrate a new technology that may help just because it doesn’t solve all of our problems all by itself. You can’t build a whole house with just a hammer. But you can’t build a house without a hammer or similar tool. C2PA-verified authenticity is one tool, like a hammer.
With only one C2PA-capable camera that scarcely anybody can afford and no working software on the market at this moment, it seems like we face a very long climb up a very tall mountain to even make a dent.
We could make a dent sooner than we might think. We know that more journalist-oriented cameras are in the pipeline. Big players are lining up to use them. With industry giant AP on board, it’s reasonable to expect other photo agencies like Reuters, Getty, Zuma, and Bloomberg to follow suit. Publishers have a lot to gain. They’ll jump in as soon as they can. Even a tiny shift toward trust in the organizations we depend on to make democracy work will feel like a sea change.
Help tip the scales
The CAI says on its website that not having a C2PA credential does not mean that a given photo is fake. That’s true. This new process is not meant to work by stopping dishonest people. It’s meant to empower honest people — something I’ve evangelized a lot in this blog. If enough photographers and enough news outlets – not all of either, but enough – choose to use this tool or one like it, that might help restore the balance.
Could one feature in one stratospherically priced camera mark a first step toward saving our democracy? What do you think? Is this a metadata milestone? Or just a flash in the pan? Let us know in the comments.
Disclaimer: I am a member of the CAI. That doesn’t mean I’ve sworn a loyalty oath to them or anything. My conclusions and errors are both mine alone.
Corrections and errata — As I become aware of errors, I will correct the affected sections of this post. I’ll put orange “UPDATED CONTENT” flags on sections that have seen substantive updates. To the extent that it’s practical, I’ll briefly list here errors that I’ve fixed and what I’ve done to fix them.
Update of 1-7-24. I had erroneously said that the C2PA manifest was encrypted. It’s not. I have corrected to say that it’s signed. Also tightened up my use of the word “blockchain”.
Update of 1-11-24. Explained explicitly that C2PA metadata does not take the place of other standard-compliant metadata. Also replaced the screenshot of C2PA data as seen by ExifTool with a better version.
In case it matters to you,
“Three years later, we see — from Leica — the first piece of C2PA-equipped hardware that photographers (granted, well-healed ones) can benefit from.”
well-healed = well-heeled, i.e., rich people with costly pristine footwear, unlike the rest of us peasants with broken-down, worn-out shoes.
Now now. Lots of starving artist photographers use Leicas. That said, your etymology is literally (ouch) spot on. Go to a gathering of people who own Leicas. The occasional professional photographer excepted, you will see some mighty fine footwear!
Thank you Carl for pointing out all the ugly details in such a wonderful nonchalant way! Unfortunately the details are necessary if we want to move forward, and this is a nice springboard to cuss and discuss all the bits.
Your point that you don’t need a C2PA-compliant camera (e.g. Leica M11-P) to digitally sign photos is right on track. Hint, get your signing certificates now before the price goes up.
Get a signing cert (Public/Private key pair) from whom? The advantage of the key being embedded in the camera is that the user doesn’t have to acquire one and the CA (certificate authority) is presumably the manufacturer (Leica) or Adobe, though it could in theory be any CA based on my understanding.
Acquiring your own cert might allow you to retroactively sign your archives. In essence, putting an accountability and integrity stamp on the state of your archives. I’m not aware of a tool that would easily allow for that yet.
The certificate I was talking about would be to allow you to sign your work after you’ve edited it, say in Photoshop. Which you would have to do to keep the chain intact when you turn in your work.
I just paid my yearly Adobe bill. Maybe for that kind of money they could include a coupon for a cert?
I’m not sure which vulnerability you’re talking about Carl, but in the “fancy” attacks realm we often say that “physical access is root access”, meaning having control of the device implies full control over it. Expanding on this concept, my concerns are how the signing keys are generated and stored in the camera. In the past, we’ve seen lots of key material being hardcoded into the firmware, which required a (relatively) simple disassembly of the firmware binary to extract the key(s). The signing keys are the holy grail. For a more advanced attack, an attacker may discover a way into an interactive maintenance mode to extract firmware from the device itself. One way to protect against this is to embed the encryption mechanisms in a dedicated chip that not only does the encryption but also stores the key material (a la the TPM in PCs/Windows). It’s much more difficult to attack TPM (not impossible). Cracking SHA256 is almost never the answer, at least until maybe quantum computing is a real thing.
While this /is/ conjecture, I spent 20 years around and in cyber security performing some of these attacks myself. Once the stakes are raised, all it will take is enough time and money to crack or bypass the security mechanisms.
CAI has done an admirable job explaining their risk assessment, what they ARE trying to protect against and what they are NOT trying to protect against. The code, the discussion board, and the project are all very open for public comment. The manufacturers, on the other hand, have been very tight-lipped. My questions would be to ask what kind of risk assessment/s they’ve done, what kind/s of threat actors they’re trying to mitigate, whether the keys can be revoked on camera and refreshed, and how they define their security boundaries – in other words, “show your work”. Adding CAI is a great step, and I’m eagerly awaiting its addition to my Z8/Z9, but I’ve also been in the security industry long enough to see the patterns of an industry adopting new security mechanisms and the common ways they fail.
>Once the stakes are raised, all it will take is enough time and money to crack or bypass the security mechanisms.
Exactly. We’re talking about retail-level crimes here, not high stakes crimes. If somebody finds a cheap and dirty cyber attack, they’ll do it, sure. But it’s my position that, for the most part, we’ll just see bone simple physical or human factors attacks. I mentioned putting fake C2PA icons on your fake news site. Need a camera certificate? Just buy – or steal – the darned camera. Instead of busting their butt finagling the key out of the camera, they could use the so-called “analog hole” and use their stolen camera to copy their fake image. The quality won’t be as good as what could be achieved with a proper hack but that doesn’t matter. Consumers of disinformation want to believe. They won’t notice the flaws.
I think we all need to use whatever voice we have to urge all the parties, C2PA, camera makers, and developers, to carefully consider where the real real-world threats will be and try to mitigate them. Society-wide, we have a crappy record at this. We are great at making systems so “secure” that users can’t use them while not noticing some character pluck a Post-it note off a wall to do the world’s largest (at the time) credit card breach.
I have more of a warm and fuzzy about Leica’s embedded key than Sony’s software-only approach. But I leave that argument for folks with your expertise. If Sony’s approach leads to a few more breaches but a lot more cameras on the street with signing capability, that would seem to be a win on balance. End of the day, the bad actors will show us where the soft spots are.