What Happens When AI-Generated Lies Are More Compelling than the Truth?

Technology
  • Fake photographs have been around as long as photographs have been around. A widely circulated picture of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck and bulging Adam’s apple.

    In a photographic portrait made to memorialize the president after his assassination, the artist Thomas Hicks transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. (The body Hicks chose, perversely enough, was that of the proslavery zealot John C. Calhoun.)

    By the close of the nineteenth century, photographic negatives were routinely doctored in darkrooms, through such techniques as double exposure, splicing, and scraping and inking. Subtly altering a person’s features to obscure or exaggerate ethnic traits was particularly popular, for cosmetic and propagandistic purposes alike.

    But the old fakes were time-consuming to create and required specialized expertise. The new AI-generated “deepfakes” are different. By automating their production, tools like Midjourney and OpenAI’s DALL-E make the images easy to generate—you need only enter a text prompt. They democratize counterfeiting. Even more worrisome than the efficiency of their production is the fact that the fakes conjured up by artificial intelligence lack any referents in the real world. There’s no trail behind them that leads back to a camera recording an image of something that actually exists. There’s no original that was doctored. The fakes come out of nowhere. They furnish no evidence.

    Many fear that deepfakes, so convincing and so hard to trace, make it even more likely that people will be taken in by lies and propaganda on social media. A series of computer-generated videos featuring a strikingly realistic but entirely fabricated Tom Cruise fooled millions of unsuspecting viewers when it appeared on TikTok in 2021. The Cruise clips were funny. That wasn’t the case with the fake, sexually explicit images of celebrities that began flooding social media in 2024. In January, X was so overrun by pornographic, AI-generated pictures of Taylor Swift that it had to temporarily block users from searching the singer’s name.

  • This is 90% hyperbole. As always, believe none of what you hear and only half of what you see. We live most of our lives responding to shit we personally witnessed. Trust your senses. Of course the other part is a matter for concern, but not like the apocalyptic crowd would tell you.

    It is always a safe bet that the snake oil salespeople are, once again, selling snake oil.

  • believe none of what you hear and only half of what you see.

    This has caused a huge amount of problems in the last decade or so, and isn't necessarily something to celebrate.

  • Yep, because in the end you'll believe in something regardless, and that something is whatever sits right with you more often than not.

  • The trick is, snake oil salesmen exist because there are customers. You might be smart enough to spot them coming, but many, many, many people are not. Being dismissive of scamming as an issue because you can spot them is like being dismissive of drownings because you know how to swim. It ignores the harm done to your world by having others around you destroyed, sometimes because they are cocky and hubristic, sometimes just because they were caught in a weak moment, just a bit too tired to notice the difference between rn and m in an email address.
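    The rn-versus-m trick is a homoglyph attack: characters or character pairs that render almost identically in many fonts. As a minimal sketch (my own illustration, not anything from this thread), here is how a mail client might flag a lookalike domain; the `CONFUSABLES` map is a tiny hand-picked subset, not a complete confusables table:

    ```python
    # Flag domains that render like a trusted domain but aren't it,
    # e.g. "rn" posing as "m", or Cyrillic "а" posing as Latin "a".
    import unicodedata

    # Hand-picked subset of lookalikes (illustrative, far from exhaustive).
    CONFUSABLES = {
        "rn": "m",        # two ASCII letters that render like one
        "\u0430": "a",    # Cyrillic small a
        "\u043e": "o",    # Cyrillic small o
        "\u0435": "e",    # Cyrillic small e
        "0": "o",
        "1": "l",
    }

    def skeleton(domain: str) -> str:
        """Collapse known lookalikes so visually similar domains compare equal."""
        s = unicodedata.normalize("NFKC", domain.lower())
        for fake, real in CONFUSABLES.items():
            s = s.replace(fake, real)
        return s

    def looks_like(candidate: str, trusted: str) -> bool:
        """True if candidate is NOT the trusted domain but renders like it."""
        return candidate != trusted and skeleton(candidate) == skeleton(trusted)

    print(looks_like("paypal.corn", "paypal.com"))  # rn vs m -> True
    print(looks_like("paypal.com", "paypal.com"))   # same domain -> False
    ```

    Real implementations use the full Unicode confusables data (UTS #39) rather than a literal map like this, but the "skeleton and compare" idea is the same.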

  • The thing about compelling lies is not that they are new, just that they are easier to scale. The most common effect of compelling lies is their ability to get well-intentioned people to support malign causes and give their money to fraudsters. So expect that to expand, kind of like it already has been.

    The big question for me is what the response will be. Will we make lying illegal? Will we become a world of ever more paranoid isolationists, returning to clans, families, households as the largest social group you can trust? Will most people even have the intelligence to see what is happening and respond? Or will most people be turned into info-puppets, controlled into behaviours by manipulation of their information diet to an unprecedented degree? I don't know.
