Watermarks offer no defense against deepfakes, study suggests
-
This post did not contain any content.
Watermarks offer no defense against deepfakes, study suggests
New research from the University of Waterloo's Cybersecurity and Privacy Institute demonstrates that any artificial intelligence (AI) image watermark can be removed, without the attacker needing to know the design of the ...
(techxplore.com)
I think maybe an update to the image format standards could help, where the format somehow includes a hash from the instrument that took the photo or video, so that only media that can be verified to have been taken by a physical instrument can be used in legal matters, reporting, or journals.
Either this hash could be verified by some algorithm, or maybe the media could depend on the hash in such a way that it is corrupted if it gets altered.
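A rough sketch of the "corrupted if altered" part, assuming the capture device wrote a SHA-256 of the image bytes next to the file (the filenames here are made up):

```python
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# Hypothetical: the camera wrote photo.jpg plus photo.jpg.sha256 at capture time.
recorded = Path("photo.jpg.sha256").read_text().strip()
if fingerprint("photo.jpg") == recorded:
    print("bytes match the recorded fingerprint")
else:
    print("media was altered after the fingerprint was recorded")
```

Of course a bare hash only proves the bytes haven't changed since the hash was written, not that a physical camera produced them, which is where the signing ideas in the replies come in.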
-
I think maybe an update to the image format standards could help, where the format somehow includes a hash from the instrument that took the photo or video, so that only media that can be verified to have been taken by a physical instrument can be used in legal matters, reporting, or journals.
Either this hash could be verified by some algorithm, or maybe the media could depend on the hash in such a way that it is corrupted if it gets altered.
There are already plans for metadata signing. I think some high-end Canon cameras might do it already. It basically allows proof (via a public/private key signature over the hash) that a particular camera took that photo.
The idea is that you can create a chain of custody for an image. Each edit requires a new signature, with each party responsible for verifying the previous chain to protect their own reputation.
It's far from perfect, but it will help a lot with things like legal cases.
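A minimal sketch of what that chain could look like, assuming Ed25519 keys and the Python `cryptography` package (the camera and editor keys here are stand-ins, not how any vendor actually does it):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical keys: in practice the camera key lives inside the device and
# the editor key with whoever modifies the image.
camera_key = Ed25519PrivateKey.generate()
editor_key = Ed25519PrivateKey.generate()

def sign_step(key: Ed25519PrivateKey, image: bytes, prev_sig: bytes = b"") -> bytes:
    """Sign the image hash chained to the previous signature in the chain."""
    return key.sign(hashlib.sha256(prev_sig + image).digest())

original = b"...raw image bytes..."
sig1 = sign_step(camera_key, original)               # signed at capture
edited = b"...edited image bytes..."
sig2 = sign_step(editor_key, edited, prev_sig=sig1)  # each edit re-signs the chain

# Verification walks the chain using the published public keys.
try:
    camera_key.public_key().verify(sig1, hashlib.sha256(original).digest())
    editor_key.public_key().verify(sig2, hashlib.sha256(sig1 + edited).digest())
    print("chain of custody verifies")
except InvalidSignature:
    print("chain broken: some step was altered or signed by the wrong key")
```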
-
I think maybe an update to the image format standards could help, where the format somehow includes a hash from the instrument that took the photo or video, so that only media that can be verified to have been taken by a physical instrument can be used in legal matters, reporting, or journals.
Either this hash could be verified by some algorithm, or maybe the media could depend on the hash in such a way that it is corrupted if it gets altered.
If the hash can be created at the time the image/media is created, then it can be faked.
-
I think maybe an update to the image format standards could help, where the format somehow includes a hash from the instrument that took the photo or video, so that only media that can be verified to have been taken by a physical instrument can be used in legal matters, reporting, or journals.
Either this hash could be verified by some algorithm, or maybe the media could depend on the hash in such a way that it is corrupted if it gets altered.
So we should all have to throw away our phones, cameras, etc., and buy new ones that have proprietary hardware attestation?
-
This post did not contain any content.
Watermarks offer no defense against deepfakes, study suggests
New research from the University of Waterloo's Cybersecurity and Privacy Institute demonstrates that any artificial intelligence (AI) image watermark can be removed, without the attacker needing to know the design of the ...
(techxplore.com)
This is not surprising: AI mimics plausible patterns in images, and watermarks are mostly made of noise.
-
So we should all have to throw away our phones, cameras, etc., and buy new ones that have proprietary hardware attestation?
Oh come on, you're gonna do that in two years anyway.
-
If the hash can be created at the time the image/media is created, then it can be faked.
There are ways to be sure of authenticity, ways that can't be faked.
-
If the hash can be created at the time the image/media is created, then it can be faked.
Depends on the hash: some are traceable to a cryptographic public key and thus cannot be faked. Most are not, but there are options that can be. Normally we'd call such a thing a signature rather than a hash, but it's the same thing to a layman who doesn't understand the distinction.
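The difference in a tiny sketch (Python, made-up data): anyone can recompute a plain hash for any image, but only whoever holds the private key can produce a signature that verifies against the public key.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

image = b"...image bytes..."

# A bare hash: anyone, forger included, can compute this for any image they like.
plain_hash = hashlib.sha256(image).hexdigest()

# A signature over that hash: only the private-key holder can produce it,
# and anyone with the public key can check it (verify() raises if it's forged).
key = Ed25519PrivateKey.generate()
signature = key.sign(hashlib.sha256(image).digest())
key.public_key().verify(signature, hashlib.sha256(image).digest())
```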
-
This post did not contain any content.
Watermarks offer no defense against deepfakes, study suggests
New research from the University of Waterloo's Cybersecurity and Privacy Institute demonstrates that any artificial intelligence (AI) image watermark can be removed, without the attacker needing to know the design of the ...
(techxplore.com)
Are we gonna have to start using Let's Encrypt as part of photography?!
-
I think maybe an update to the image format standards could help, where the format somehow includes a hash from the instrument that took the photo or video, so that only media that can be verified to have been taken by a physical instrument can be used in legal matters, reporting, or journals.
Either this hash could be verified by some algorithm, or maybe the media could depend on the hash in such a way that it is corrupted if it gets altered.
So you want to embed the make, model, and serial number into images now? Metadata on crack.
-
Are we gonna have to start using Let's Encrypt as part of photography?!
Leica, Sony, Nikon, and Canon have backed an Adobe-created digital signature solution for authentication of photos directly in the camera, including metadata like time/date stamps. But that's mainly for journalism and professional grade cameras, not the cell phones that 90+% of new images are created on.
-
Depends on the hash: some are traceable to a cryptographic public key and thus cannot be faked. Most are not, but there are options that can be. Normally we'd call such a thing a signature rather than a hash, but it's the same thing to a layman who doesn't understand the distinction.
In order for it to be traceable with a public key, it needs to be signed with the private key. That means the private key has to be on the camera. That means it can be extracted from the camera and leaked.
-
I think maybe an update to the image format standards could help, where the format somehow includes a hash from the instrument that took the photo or video, so that only media that can be verified to have been taken by a physical instrument can be used in legal matters, reporting, or journals.
Either this hash could be verified by some algorithm, or maybe the media could depend on the hash in such a way that it is corrupted if it gets altered.
The obvious limitation is that you can use a real camera, with attestation, to take a real photo of a real computer screen displaying any fake shit you can imagine, and then you have an officially hashed photo of anything.
-
This post did not contain any content.
Watermarks offer no defense against deepfakes, study suggests
New research from the University of Waterloo's Cybersecurity and Privacy Institute demonstrates that any artificial intelligence (AI) image watermark can be removed, without the attacker needing to know the design of the ...
(techxplore.com)
There is a solution, but y'all aren't going to like it.
The solution is blockchain. Actually, it's even worse: the solution is NFTs.
Not the scammy crypto-bro nonsense it has been used for, but the actual technology.
A cryptographically secure digital token that can track where something was made, where it's being used, and who has the rights to it, and that ensures it's authentic and not some copy made with AI.
Unfortunately, thanks to crypto bros, the technology has become so tainted by scams that most people get upset just hearing the letters NFT, so adoption isn't likely.
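For what it's worth, stripped of the marketplace part, the underlying data structure is just a signed, hash-linked provenance record; a hypothetical sketch in Python (the key, events, and media hashes are all made up):

```python
import hashlib
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def add_entry(chain: list, key: Ed25519PrivateKey, event: str, media_hash: str) -> None:
    """Append a signed provenance entry linked by hash to the previous entry."""
    entry = {
        "event": event,
        "media_hash": media_hash,
        "timestamp": time.time(),
        "prev": chain[-1]["entry_hash"] if chain else "",
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    entry["signature"] = key.sign(payload).hex()
    chain.append(entry)

# Hypothetical creator key and media hashes.
creator_key = Ed25519PrivateKey.generate()
chain: list = []
add_entry(chain, creator_key, "captured", hashlib.sha256(b"raw media").hexdigest())
add_entry(chain, creator_key, "published", hashlib.sha256(b"exported media").hexdigest())
```

Whether that record lives on a public blockchain or in a plain signed sidecar file is a separate question, which is basically what the replies get at.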
-
There is a solution, but y'all aren't going to like it.
The solution is blockchain. Actually, it's even worse: the solution is NFTs.
Not the scammy crypto-bro nonsense it has been used for, but the actual technology.
A cryptographically secure digital token that can track where something was made, where it's being used, and who has the rights to it, and that ensures it's authentic and not some copy made with AI.
Unfortunately, thanks to crypto bros, the technology has become so tainted by scams that most people get upset just hearing the letters NFT, so adoption isn't likely.
I don't think this is that controversial. If you take out NFTs, it's using the blockchain as a hash. I think that works, but at that point you might as well use regular hashes to verify the integrity of your video.
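e.g. a plain SHA-256 over the video file, read in chunks so large files don't have to fit in memory (the filename is made up):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a large file in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(file_sha256("bodycam_footage.mp4"))  # hypothetical file
```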
-
There is a solution, but y'all aren't going to like it.
The solution is blockchain. Actually, it's even worse: the solution is NFTs.
Not the scammy crypto-bro nonsense it has been used for, but the actual technology.
A cryptographically secure digital token that can track where something was made, where it's being used, and who has the rights to it, and that ensures it's authentic and not some copy made with AI.
Unfortunately, thanks to crypto bros, the technology has become so tainted by scams that most people get upset just hearing the letters NFT, so adoption isn't likely.
There are also privacy issues with having an indelible marker for the origin and chain of custody of every digital artifact. And other non-privacy issues.
So the idea here is that my phone camera attaches a crypto token to the metadata of every photo it takes? (Or worse, embeds it into the image steganographically, like printer dots.) Then if I send that photo to a friend in Signal, that app attaches a token indicating the transfer? And so on?
If that's a video of, say, police murdering someone, maybe I don't want a perfect trail pointing back to me just to prove I didn't deepfake it. And if that's where we are, then every video of power being abused is going to "be fake", because no sane person would sacrifice their privacy, possibly their life, to "prove" a video isn't AI generated.
And those in power, the mainstream media say, aren't going to demonstrate the crypto chain of custody for every video they show on the news. They're going to show whatever they want, then say "it's legit, trust us!", and most people will.
These are the fundamental issues with crypto that people don't understand: too much of it is opt-in, it's unclear to most people what's actually proved or protected, and it doesn't address or understand where trust, authority, and power actually come from.
-
In order for it to be traceable with a public key, it needs to be signed with the private key. That means the private key has to be on the camera. That means it can be extracted from the camera and leaked.
Maybe. There are ways to provision a private key that isn't easy to extract: a chip that creates a private key on first power-on and then saves it to internal memory, for example.
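Roughly the secure-element pattern; a toy sketch in Python (the key file path is made up, and a real device would keep the key in tamper-resistant hardware rather than a file it can hand back to you):

```python
from pathlib import Path
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

KEY_FILE = Path("device_key.der")  # hypothetical internal storage

def device_key() -> Ed25519PrivateKey:
    """Generate the key on first power-on, then load the same key forever after."""
    if KEY_FILE.exists():
        return serialization.load_der_private_key(KEY_FILE.read_bytes(), password=None)
    key = Ed25519PrivateKey.generate()
    KEY_FILE.write_bytes(key.private_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PrivateFormat.PKCS8,
        encryption_algorithm=serialization.NoEncryption(),
    ))
    return key

# The firmware only ever exposes a sign() operation, never the key itself.
signature = device_key().sign(b"sha-256 of the image goes here")
```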
-
Maybe. There are ways to provision a private key that isn't easy to extract: a chip that creates a private key on first power-on and then saves it to internal memory, for example.
Not easy to extract, sure, but is it secure enough to claim it hasn't been leaked, so that it forms a secure chain of custody? Once one key has been leaked, it can be used to sign any fake pictures you like. I wouldn't trust that for anything more serious than "is this meme picture real?"
-
The obvious limitation is that you can use a real camera, with attestation, to take a real photo of a real computer screen displaying any fake shit you can imagine, and then you have an officially hashed photo of anything.
If you've ever tried this, the moiré pattern of the pixels is obvious. You'd need a much higher-resolution display than the image sensor.
-
I don't think this is that controversial. If you take out NFTs, it's using the blockchain as a hash. I think that works, but at that point you might as well use regular hashes to verify the integrity of your video.
at that point you might as well use regular hashes to verify the integrity of your video
Generated by what authority, though?