Here’s how to spot AI writing, according to Wikipedia

Technology
  • This post did not contain any content.
  • This post did not contain any content.

    That is an excellent read, not just for how to spot AI but also for the many examples of non-neutral wording.

    It is honestly pretty entertaining how many of the common signs of AI output do not fit the Wikipedia style requirements, and how many of them remind me of high school essays.

  • This post did not contain any content.

    Mental note: add a step to my Wikipedia article generator agentic flow that has the LLM check this page and remove these giveaways.

  • This post did not contain any content.

    Something that popped up when I was looking up information for Eyes of Wakanda:

    Which seems copy-pasted from another AI-generated site:

    FWIW - NONE of this is true. Okoye does not appear in any of the four episodes, voiced by Danai Gurira or anyone else. Given the times the episodes are set in, 1260 BC, 1200 BC, 1400 AD and 1896 AD, that would have been impossible.

    Episode 4 does have a future Black Panther, but she's from 500 years in the future (1896 + 500 = 2396?) and voiced by Anika Noni Rose.

  • This post did not contain any content.

    I'd gotten really good at discerning a ChatGPT bot from a human account just from years of catching bots on Reddit.

    There are a lot of red flags and tells that would be very hard to completely eradicate. There will always be an uncanny valley.

  • Mental note: add a step to my Wikipedia article generator agentic flow that has the LLM check this page and remove these giveaways.

    Pretty much just have the LLM reduce the size of its answer, proofread it that it makes sense and didn't screw punctuation and boom no one will no.

  • This post did not contain any content.

    Really helpful page, thank you for sharing.

  • I'd gotten really good at discerning a ChatGPT bot from a human account just from years of catching bots on Reddit.

    There are a lot of red flags and tells that would be very hard to completely eradicate. There will always be an uncanny valley.

    You think you have - but there’s really no way of knowing.

    Just because someone writes like a bot doesn’t mean they actually are one. Feeling like "you’ve caught one" doesn’t mean you did - it just means you think you did. You might have been wrong, but you never got confirmation to know for sure, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as "proof" without actual verification.

    And then there’s the classic toupee fallacy: "All toupees look fake - I’ve never seen one that didn’t." That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.

  • You think you have - but there’s really no way of knowing.

    Just because someone writes like a bot doesn’t mean they actually are one. Feeling like "you’ve caught one" doesn’t mean you did - it just means you think you did. You might have been wrong, but you never got confirmation to know for sure, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as "proof" without actual verification.

    And then there’s the classic toupee fallacy: "All toupees look fake - I’ve never seen one that didn’t." That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.

    Studies have shown we're pretty bad at detecting good AI stuff, regardless of how skilled we think we are. It's the crappy AI slop that makes everybody think they're Sherlock Holmes.

  • This post did not contain any content.

    I was beginning to think I was smarter than the internet.
    In some facets we all are much smarter than AI.
    However, we are not all clever enough to explain and express ourselves to the fullest.
    Slight variations in nuance are crucial to the humour of AI.
    Otherwise a giant entity resembling human consciousness is taking form.
    The last F’ng thing I’d like to see is a “Lawnmower Man” type scenario that takes every word ever said or googled by anyone at its literal meaning, even when it’s completely metaphorical, without any understanding of the underlying context.
    Sometimes those thoughts creep into my dreams.

  • You think you have - but there’s really no way of knowing.

    Just because someone writes like a bot doesn’t mean they actually are one. Feeling like "you’ve caught one" doesn’t mean you did - it just means you think you did. You might have been wrong, but you never got confirmation to know for sure, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as "proof" without actual verification.

    And then there’s the classic toupee fallacy: "All toupees look fake - I’ve never seen one that didn’t." That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.

    Just to add to this: I've noticed some of my coworkers who have developing English skills sound like LLMs sometimes.
    I imagine this is probably because they've only written English in a school or work setting, never for personal communication.

    ...or maybe they're all just using LLMs idk

  • Something that popped up when I was looking up information for Eyes of Wakanda:

    Which seems copy-pasted from another AI-generated site:

    FWIW - NONE of this is true. Okoye does not appear in any of the four episodes, voiced by Danai Gurira or anyone else. Given the times the episodes are set in, 1260 BC, 1200 BC, 1400 AD and 1896 AD, that would have been impossible.

    Episode 4 does have a future Black Panther, but she's from 500 years in the future (1896 + 500 = 2396?) and voiced by Anika Noni Rose.

    Yup, there seems to be misinformation there. Even Perplexity gets it wrong, but it does say there's some inconsistency between websites.

  • Pretty much just have the LLM reduce the size of its answer, proofread it that it makes sense and didn't screw punctuation and boom no one will no.

    and boom no one will no.

    You're not an LLM, for sure.

  • You think you have - but there’s really no way of knowing.

    Just because someone writes like a bot doesn’t mean they actually are one. Feeling like "you’ve caught one" doesn’t mean you did - it just means you think you did. You might have been wrong, but you never got confirmation to know for sure, so you have no real basis for judging how good your detection rate actually is. It’s effectively begging the question - using your original assumption as "proof" without actual verification.

    And then there’s the classic toupee fallacy: "All toupees look fake - I’ve never seen one that didn’t." That just means you’re good at spotting bad toupees. You can’t generalize from that and claim you’re good at detecting toupees in general, because all the good ones slip right past you unnoticed.

    I mean, I wasn't going around accusing everyone of being a bot or thinking that I was right all the time. I did have a few false positives and owned up to it. But once you see the pattern of behavior (big gap between joined date and first active date, only posting in karma-farming subs or subs known to have high bot populations), accounts literally keeping "as a large language model..." or "Okay, here's a supportive Reddit-style comment with some minor spelling mistakes..." in some of their comments, ads at the end of their comments, posting at all hours of the day without any gaps for sleep or work, or posting comment fragments identical to ones posted months or years earlier by someone else, you start to realize maybe these accounts might not be genuine.

    I have screenshots to prove it, but if you really believe I don't know what I'm talking about, then there's really nothing I can say to dissuade you.

  • This post did not contain any content.

    That's why I like make basic grammatical mistakes, speling erors, and include a few fucks in my internet writing. Nobody's not gona mistake me for no got dagned robot.

  • I mean, I wasn't going around accusing everyone of being a bot or thinking that I was right all the time. I did have a few false positives and owned up to it. But once you see the pattern of behavior (big gap between joined date and first active date, only posting in karma-farming subs or subs known to have high bot populations), accounts literally keeping "as a large language model..." or "Okay, here's a supportive Reddit-style comment with some minor spelling mistakes..." in some of their comments, ads at the end of their comments, posting at all hours of the day without any gaps for sleep or work, or posting comment fragments identical to ones posted months or years earlier by someone else, you start to realize maybe these accounts might not be genuine.

    I have screenshots to prove it, but if you really believe I don't know what I'm talking about, then there's really nothing I can say to dissuade you.

    I don't think anyone is questioning your ability to follow a hunch and get evidence to prove it.

    The point is about how often you're correct on your first guess.
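
The comment a few items up about catching Reddit bots lists account-level signals (a long gap between join date and first activity, posting only in karma-farming or bot-heavy subs, leftover LLM boilerplate, ads in comments, round-the-clock posting, duplicated comments) that are concrete enough to express as a simple checklist. Below is a minimal sketch of that idea; the Account structure, field names, thresholds, and phrase list are all invented for illustration and are not anything Reddit's API or the Wikipedia page actually defines.

```python
# Hypothetical sketch only: Account, its fields, the thresholds, and the
# phrase list are made-up examples of the red flags described above, not
# anything provided by Reddit's API or the Wikipedia page.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Set

LLM_BOILERPLATE = (
    "as a large language model",
    "here's a supportive reddit-style comment",
)

@dataclass
class Account:
    created: datetime        # when the account was registered
    first_post: datetime     # when it first became active
    post_hours: List[int]    # hour of day (0-23) of each post
    subreddits: List[str]    # where it posts
    comments: List[str]      # raw comment text

def bot_signals(acct: Account, bot_heavy_subs: Set[str]) -> List[str]:
    """Return the red flags from the comment above that this account trips."""
    flags = []

    # 1. Big gap between joined date and first active date.
    if (acct.first_post - acct.created).days > 365:
        flags.append("long gap between joining and first activity")

    # 2. Only posts in karma-farming / bot-heavy subs.
    if acct.subreddits and all(s in bot_heavy_subs for s in acct.subreddits):
        flags.append("only posts in bot-heavy subs")

    # 3. LLM boilerplate left in the comments.
    if any(b in c.lower() for c in acct.comments for b in LLM_BOILERPLATE):
        flags.append("LLM boilerplate in comments")

    # 4. Posting at all hours with no gap for sleep or work.
    if len(set(acct.post_hours)) >= 22:
        flags.append("posts around the clock")

    # 5. Comments identical to ones posted before.
    if len(set(acct.comments)) < len(acct.comments):
        flags.append("duplicated comments")

    return flags

if __name__ == "__main__":
    suspect = Account(
        created=datetime(2015, 1, 1),
        first_post=datetime(2023, 6, 1),
        post_hours=list(range(24)),
        subreddits=["FreeKarma4U"],
        comments=[
            "As a large language model, I can't browse Reddit.",
            "Great post!",
            "Great post!",
        ],
    )
    print(bot_signals(suspect, bot_heavy_subs={"FreeKarma4U"}))
```

In practice each of these signals is noisy on its own; the thread's point about false positives and the toupee fallacy applies to any single check, so a checklist like this can only flag accounts worth a closer look, not prove anything.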