
What Happens When AI-Generated Lies Are More Compelling than the Truth?

Technology
  • Fake photographs have been around as long as photographs have been around. A widely circulated picture of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck and bulging Adam’s apple.

    In a photographic portrait made to memorialize the president after his assassination, the artist Thomas Hicks transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. (The body Hicks chose, perversely enough, was that of the proslavery zealot John C. Calhoun.)

    By the close of the nineteenth century, photographic negatives were routinely doctored in darkrooms, through such techniques as double exposure, splicing, and scraping and inking. Subtly altering a person’s features to obscure or exaggerate ethnic traits was particularly popular, for cosmetic and propagandistic purposes alike.

    But the old fakes were time-consuming to create and required specialized expertise. The new AI-generated “deepfakes” are different. By automating their production, tools like Midjourney and OpenAI’s DALL-E make the images easy to generate—you need only enter a text prompt. They democratize counterfeiting. Even more worrisome than the efficiency of their production is the fact that the fakes conjured up by artificial intelligence lack any referents in the real world. There’s no trail behind them that leads back to a camera recording an image of something that actually exists. There’s no original that was doctored. The fakes come out of nowhere. They furnish no evidence.

    Many fear that deepfakes, so convincing and so hard to trace, make it even more likely that people will be taken in by lies and propaganda on social media. A series of computer-generated videos featuring a strikingly realistic but entirely fabricated Tom Cruise fooled millions of unsuspecting viewers when it appeared on TikTok in 2021. The Cruise clips were funny. That wasn’t the case with the fake, sexually explicit images of celebrities that began flooding social media in 2024. In January, X was so overrun by pornographic, AI-generated pictures of Taylor Swift that it had to temporarily block users from searching the singer’s name.
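
    As a concrete illustration of the "enter a text prompt" workflow described above, here is a minimal sketch using OpenAI's Python SDK; the model name, prompt, and output handling are illustrative assumptions rather than a fixed recipe.

        # Minimal sketch: generating an image from a single text prompt with the
        # OpenAI Python SDK. Assumes the OPENAI_API_KEY environment variable is set;
        # the model name and size below are assumptions for illustration.
        from openai import OpenAI

        client = OpenAI()

        result = client.images.generate(
            model="dall-e-3",  # assumed model name
            prompt="A 19th-century studio portrait of a statesman, albumen print style",
            size="1024x1024",
            n=1,
        )

        # The response carries a URL (or base64 data) for the generated picture;
        # no camera and no source photograph are involved at any point.
        print(result.data[0].url)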

  • This is 90% hyperbole. As always, believe none of what you hear and only half of what you see. We live most of our lives responding to shit we personally witnessed. Trust your senses. Of course the other part is a matter for concern, but not like the apocalyptic crowd would tell you.

    It is always a safe bet that the snake oil salespeople are, once again, selling snake oil.

  • believe none of what you hear and only half of what you see.

    This has caused a huge amount of problems in the last decade or so, and isn't necessarily something to celebrate.

  • Yep, because in the end you'll believe in something regardless, and that something is whatever sits right with you more often than not.

  • It is always a safe bet that the snake oil salespeople are, once again, selling snake oil.

    The trick is, snake oil salesmen exist because there are customers. You might be smart enough to spot them coming, but many, many, many people are not. Being dismissive of scamming as an issue because you can spot them is like being dismissive of drownings because you know how to swim. It ignores the harm to your world done by having others around you destroyed, sometimes because they are cocky and hubristic, sometimes just because they were caught in a weak moment, just a bit too tired to notice the difference between rn and m in an email address.

  • The thing about compelling lies is not that they are new, just that they are now easier to scale. Their most common effect is to get well-intentioned people to support malign causes and hand their money to fraudsters. So expect that to expand, as it already has.

    The big question for me is what the response will be. Will we make lying illegal? Will we become a world of ever more paranoid isolationists, returning to clans, families, and households as the largest social groups we can trust? Will most people even have the intelligence to see what is happening and respond? Or will most people be turned into info-puppets, their behaviour steered by manipulation of their information diet to an unprecedented degree? I don't know.
