
What Happens When AI-Generated Lies Are More Compelling than the Truth?

Technology
  • Fake photographs have been around as long as photographs have been around. A widely circulated picture of Abraham Lincoln taken during the presidential campaign of 1860 was subtly altered by the photographer, Mathew Brady, to make the candidate appear more attractive. Brady enlarged Lincoln’s shirt collar, for instance, to hide his bony neck and bulging Adam’s apple.

    In a photographic portrait made to memorialize the president after his assassination, the artist Thomas Hicks transposed Lincoln’s head onto a more muscular man’s body to make the fallen president look heroic. (The body Hicks chose, perversely enough, was that of the proslavery zealot John C. Calhoun.)

    By the close of the nineteenth century, photographic negatives were routinely doctored in darkrooms, through such techniques as double exposure, splicing, and scraping and inking. Subtly altering a person’s features to obscure or exaggerate ethnic traits was particularly popular, for cosmetic and propagandistic purposes alike.

    But the old fakes were time-consuming to create and required specialized expertise. The new AI-generated “deepfakes” are different. By automating their production, tools like Midjourney and OpenAI’s DALL-E make the images easy to generate—you need only enter a text prompt. They democratize counterfeiting. Even more worrisome than the efficiency of their production is the fact that the fakes conjured up by artificial intelligence lack any referents in the real world. There’s no trail behind them that leads back to a camera recording an image of something that actually exists. There’s no original that was doctored. The fakes come out of nowhere. They furnish no evidence.

    Many fear that deepfakes, so convincing and so hard to trace, make it even more likely that people will be taken in by lies and propaganda on social media. A series of computer-generated videos featuring a strikingly realistic but entirely fabricated Tom Cruise fooled millions of unsuspecting viewers when it appeared on TikTok in 2021. The Cruise clips were funny. That wasn’t the case with the fake, sexually explicit images of celebrities that began flooding social media in 2024. In January, X was so overrun by pornographic, AI-generated pictures of Taylor Swift that it had to temporarily block users from searching the singer’s name.

  • This is 90% hyperbole. As always, believe none of what you hear and only half of what you see. We live most of our lives responding to shit we personally witnessed. Trust your senses. Of course the other part is a matter for concern, but not like the apocalyptic crowd would tell you.

    It is always a safe bet that the snake oil salespeople are, once again, selling snake oil.

  • believe none of what you hear and only half of what you see.

    This has caused a huge amount of problems in the last decade or so, and isn't necessarily something to celebrate.

  • Yep, because in the end you'll believe in something regardless, and that something is whatever sits right with you more often than not.

  • The trick is, snake oil salesmen exist because there are customers. You might be smart enough to spot them coming, but many, many, many people are not. Being dismissive of scamming as an issue because you can spot a scam is like being dismissive of drownings because you know how to swim. It ignores the harm done to your world when the people around you are destroyed, sometimes because they were cocky and hubristic, sometimes just because they were caught in a weak moment, a bit too tired to notice the difference between rn and m in an email address (see the sketch below).
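
    To make the rn/m point concrete, here is a minimal Python sketch using made-up addresses; the "normalizing check" is just an illustration, not a real anti-phishing tool:

        # Hypothetical addresses: "exarnple.com" spoofing "example.com".
        legit = "alice@example.com"
        spoof = "alice@exarnple.com"  # "rn" standing in for "m"

        print(legit == spoof)  # False: they really are different addresses

        # Crude screen: flag an address that matches a trusted one only
        # after collapsing the common "rn" -> "m" confusable pair.
        def looks_like(addr: str, trusted: str) -> bool:
            return addr != trusted and addr.replace("rn", "m") == trusted

        print(looks_like(spoof, legit))  # True: worth a second look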

  • The thing about compelling lies is not that they are new, just that they are easier to scale. The most common effect of compelling lies is their ability to get well-intentioned people to support malign causes and give their money to fraudsters. So expect that to expand, as it already has.

    The big question for me is what the response will be. Will we make lying illegal? Will we become a world of ever more paranoid isolationists, returning to clans, families, and households as the largest social groups we can trust? Will most people even have the intelligence to see what is happening and respond? Or will most people be turned into info-puppets, steered into behaviours by manipulation of their information diet to an unprecedented degree? I don't know.

  • Windows 11 finally overtakes Windows 10 [in market share]

    Technology
    63 votes, 32 posts, 190 views
    Yeah, and it's most likely only because they're killing Windows 10 in the fall, which means a lot of companies have been working hard this year to replace a ton of computers before October. Anyone who has been down this road with 7 to 10 knows it will just cost more money if you need continued support after that: they sell you a new license that's good for a year of updates, and it doubles in cost every year after (see the sketch below).
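
    To put numbers on the doubling, a quick Python sketch; the $61 year-one figure is an assumption based on Microsoft's announced per-device Windows 10 ESU price for organizations, and actual pricing varies by program:

        # Extended Security Update (ESU) cost that doubles each year.
        # Year-one price is an assumption; see the note above.
        year_one = 61  # USD per device
        for year in (1, 2, 3):
            print(f"ESU year {year}: ${year_one * 2 ** (year - 1)} per device")
        # ESU year 1: $61 per device
        # ESU year 2: $122 per device
        # ESU year 3: $244 per device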
  • Microsoft axe another 9000 in continued AI push

    Technology
    185 votes, 24 posts, 163 views
    Yeah, my friend is dating a Google recruiter, and he overhears some absurd offers. Like, a reasonable person could retire after a few years at that salary. I have a hypothesis that rich people are bad with money.
  • You're not alone: This email from Google's Gemini team is concerning

    Technology
    838 votes, 298 posts, 2k views
    My understanding is that, in broad strokes:

    Aurora acts like a proxy or mirror that doesn't require you to sign in to get Google Play Store apps. It doesn't provide any other software besides what you specifically download from it, and it doesn't include any telemetry/tracking like the normal Google Play Store would.

    microG is a reimplementation of Google Play services (the suite of proprietary background services that Google runs on normal Android phones). microG doesn't have the bloat, tracking, and other closed-source functionality, but rather acts as a stand-in that other apps can talk to when they'd normally be talking to Google Play services. It has to be installed and configured, and I would refer to the microG GitHub or other documentation.

    GrapheneOS has its own sandboxed Google Play services, which is basically unmodified Google Play services crammed into its own sandbox with no special permissions, plus a compatibility layer that retains some functionality while keeping it from accessing app data with the high-level permissions it would normally have on a vanilla Android phone.
  • China's Electric Vehicle Factories Have Become Tourist Hotspots

    Technology
    33 votes, 2 posts, 23 views
    I'd go to one. I went to Qatar and tried to find out if they did LPG tours. They don't. Well, at least not easily.
  • How not to lose your job to AI

    Technology
    9 votes, 16 posts, 76 views
    rikudou@lemmings.world
    A nice "trick": after four or so responses where you can't get anywhere, start a new chat without the wrong context. Of course, refine your question with whatever you found out in the previous chat (a minimal sketch follows below).
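
    A minimal sketch of that trick against the OpenAI Python client; the model name and prompt are placeholders, and any chat API with an explicit message list works the same way:

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        # Don't keep appending to a stuck conversation; start a fresh one
        # that carries only what the dead-end chat taught you.
        fresh_start = [
            {"role": "user", "content": (
                "How do I configure X? I already ruled out Y and Z; "
                "both fail with a permissions error."
            )},
        ]

        reply = client.chat.completions.create(model="gpt-4o", messages=fresh_start)
        print(reply.choices[0].message.content)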
  • Build Custom WordPress Themes Easily with WP 1-Click

    Technology
    0 votes, 1 post, 13 views
    No one has replied.
  • 44 votes, 4 posts, 32 views
    It varies based on local legislation: in some places paying ransoms is banned, but it's by no means universal. It's totally valid to be against paying ransoms wherever possible, but it's not entirely black and white in some situations. For example, what if a hospital gets ransomed? Say it serves an area not served by other facilities, and people will die if it can't get back online quickly. That sounds dramatic, but critical public services get ransomed all the time, and there are undeniable real-world consequences.

    Recovery from ransomware can cost significantly more than a ransom payment if you're not prepared. It can also take months to years, especially if you're simultaneously fighting to evict a persistent (annoyed, unpaid) threat actor from your environment.

    For the record, I don't think ransoms should be paid in most scenarios, but I do think there is some nuance to consider here.