
Marginalized Americans are highly skeptical of artificial intelligence

Technology
  • You'd hope, and yet I've had people on Lemmy give me shit for being overtly anti-llm

    There's a difference between healthy skepticism and invalid, knee-jerk opposition.

    LLMs are a useful tool sometimes; I use them for refining general ideas into specific things to research, and they're pretty good at that. Sure, what they output isn't trustworthy on its own, but I can pretty easily verify most of what they spit out, and they do a great job of spitting out a lot of stuff that's related to what I asked.

    For example, I'm a SW dev, so I'll often ask it stuff like, "compare and contrast popular projects that do X", and it'll find a few for me and give easily-verifiable details about each one. Sometimes it's wrong on one or two details, but it gives me enough to decide which ones I want to look more deeply into. Or I'll do some greenfield research into a topic I'm not familiar with, and it does a fantastic job of pulling out keywords and other domain-specific stuff that help refine what I search for.

    LLMs do a lot less than their proponents claim, but they also do a lot more than detractors claim. They're a useful tool if you understand the limitations and have a rough idea of how they work. They're a terrible tool if you buy into the BS coming from the large corps pushing them. I will absolutely push back against people on both extremes.

  • Remember how, a few years ago, 3D displays and VR were being shoved in everyone's faces? I can see the current "AI" trend going the same way.

    VR is still cool and will probably always be cool, but I doubt it'll ever be mainstream. 3D was just awkward, and they really just wanted VR but the tech wasn't there yet.

    I own neither, yet I've been considering VR for a few years now, just waiting for more headsets to have proper Linux support before I get one.

    Likewise, I'm not paying for LLMs, but I do use the ones my workplace provides. They're useful sometimes, and it's nice to have them as an option when I hit a wall or something. I think they're interesting and useful, but not nearly as powerful as the big corporations want you to think.

  • I think just about everyone who is not an executive at a tech company is highly skeptical of AI.

    I was just trying to figure out how to express that exact sentiment. Thank you.

  • I'm overtly anti-llm. I don't think it's dramatic at all to be so.

    Enough has come out about it: the power and water consumed by the datacenters that train and run these models, people being driven insane by it, investors hoping to displace jobs with it, how over-reliance on it diminishes your mental faculties, people from minors to adults using it to create deepfake porn of minors (literally, it's on Lemmy right now: https://lemmy.ml/post/32581009), its use in overt misinformation (particularly from our modern warzones and disaster areas), the overt theft of writing and artistry to train these things, and last but not least: limitless spam.

    I'm affected by most of those things indirectly, but the spam affects me daily. Can't search for something on the net anymore without being served f-tier LLM-produced garbage.

    So what are the good parts? Doesn't seem like they outweigh these bad parts, whatever they are.

    Can’t search for something on the net anymore without being served f-tier LLM-produced garbage.

    I don't see a material difference vs. the f-tier human-produced garbage we had before. Garbage content will always exist, which is why it's important to learn how to filter it.

    This is true of LLMs as well: they can and do produce garbage, but they can also be useful alternatives to existing tech. I don't use them exclusively, but as an alternative when traditional search or whatever isn't working, they're quite useful. They provide rough summaries of things that I can usually verify easily, and they produce a bunch of keywords that help refine my future searches. I use them a handful of times each week and spend more time using traditional search and reading full articles, but I do find LLMs to be a useful tool in my toolbox.

    I'm also frustrated by the energy use, but it's one of those things that will get better over time as the LLM market matures from a gold rush into established businesses that need to actually make money. The same happens with pretty much every new thing in tech: there's a ton of waste until the product finds its legs, and then it becomes a lot more efficient.

  • You'd hope, and yet I've had people on Lemmy give me shit for being overtly anti-llm

    I mean, there is a place in between highly skeptical and anti. I think it's a faster and more convenient search as long as it gives sources, and it makes creating and editing media easier. I don't like the energy usage, and I do like the work on bringing that down. It's just that getting it to solve things on its own is what seems to be pushed, when we can clearly see it not working when used like that. I think the biggest issue is that it's crammed in as a solution, it works in the most half-assed manner, and they want to say that's fine.

  • I don’t blame them for being skeptical. Anything that corporations/rich people are enthusiastic about usually ends up screwing them.

    A certain amount of skepticism is healthy, but it's also quite common for people to go overboard and completely avoid a useful thing just because some rich idiot is pushing it. I've seen a lot of misinformation here on Lemmy about LLMs because people hate the environment it's in (layoffs in the name of replacing people with "AI"), but they completely ignore the merit the tech has (it's great at summarizing and at providing decent results from vague queries). If used properly, LLMs can be quite useful, but people hyper-focus on the negatives, probably because they hate the marketing material and the exceptional cases the news is great at shining a spotlight on.

    I'm also skeptical about LLMs' usefulness, but I do find them useful in some narrow use cases I have at work. They're not going to actually replace any of my coworkers anytime soon, but they do help me be a bit more productive, since they're yet another option for getting unstuck when I hit a wall.

    Just because there's something bad about a technology doesn't make it useless. If something gets a ton of funding, there's probably some merit to it, so turn your skepticism into a healthy quest for truth and maybe you'll figure out how to benefit from it.

    For example, the hype around cryptocurrency makes it easy to knee-jerk reject the technology outright, because it looks like it's merely a tool to scam people out of their money. That is partially true, but it's also a tool that makes anonymous transactions feasible. Yes, there are scammers out there pushing worthless coins in pump-and-dump schemes, but there are also privacy-focused coins (Monero, Zcash, etc.) that are being used today to help fund activists operating under repressive regimes. They're also used by people doing illegal things, but hey, so is cash, and privacy coins are basically easier-to-use cash. We probably wouldn't have had those without Bitcoin, though they use very different technology under the hood to achieve their aims. Maybe they're not for you, but they do help people.

    Instead of focusing on the bad of a new technology, more people should focus on the good, and then weigh for themselves whether the good is worth the bad. I think in many cases it is, but only if people are sufficiently informed about how to use them to their advantage.

  • Most of these arguments were made about computers back when they were gaining popularity, FYI.

    The people outsourcing their thinking to LLMs weren't going to do much thinking in the first place. And honestly, once you use them for a while, you quickly realize what their good uses and limitations are, and thinking is not their strong suit. But they're great at sorting large amounts of data and making it digestible, or at writing corpo copy that was devoid of meaning anyway.

    Remember that a hammer can kill a person just as well as it can build a house.

    Now, I agree that it's annoying that it's being shoved into everything without any good reason, but the market will sort that out. What you are seeing is everyone rushing into a nascent market before it ossifies and shakes out everyone except one or two winners. In 10 years, I'm sure it'll be more like you have one LLM that you plug into every service you use, provided by one of a handful of companies who are the only ones capable of profiting from this because of the economies of scale it requires to work. Ergo, not very different from every other tech rush that has happened in history.

    LLMs are tools, simple as. Being a Luddite, screaming and kicking and crying over them, is not going to make them go away any more than boomers crying over computers managed to make computers go away.

    Your last paragraph implies that I'm naive for believing that complaining about it will make it go away, but I've done no such thing.

    the market will sort that out

    This is the naive statement.

  • In this study, we conducted a survey (n = 742) including a representative U.S. sample and an oversample of gender minorities, racial minorities, and disabled individuals to examine how demographic factors shape AI attitudes.

    Thanks for the actual response. Personally, I think your sample size is way too low, and the selection is skewed towards people who already feel marginalized, which will, in turn, skew your results.

  • My problem with LLMs is that they're expert pattern matchers and little else.

    Ask them for the integral of ln(x) from 1 to 5 and they're sure to screw it up (the correct value is worked out below for reference).

    They'll give you something that sounds like the right answer, but their explanations are nonsense.

    Exactly... I advise anyone with some kind of expertise to ask ChatGPT some questions about their specific field and see how accurate it is... Then try to ever believe it about anything else again.
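
    For reference, this particular integral is straightforward to do by hand, which is what makes it such an easy spot-check of an LLM's answer:

    ∫ from 1 to 5 of ln(x) dx = [x·ln(x) - x] evaluated from 1 to 5 = (5·ln 5 - 5) - (1·ln 1 - 1) = 5·ln 5 - 4 ≈ 4.05

    An answer that lands on a different number, or on the right number with a nonsensical derivation, fails in exactly the way described above.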

  • Thanks for the actual response. Personally, I think your sample size is way too low, and the selection is skewed towards people who already feel marginalized, which will, in turn, skew your results.

    I looked into that, and the only question I really have is how geographically distributed the samples were. Other than that, it was an oversampled study, so <50% of the people were the control, of sorts. I don't fully understand how the sampling worked, but there is a substantial chart at the bottom of the study that shows the full distribution of responses. Even with under 1000 people, it seems legit.
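
    As a rough sanity check on the sample size (a back-of-the-envelope simple-random-sample approximation; the study's actual oversampled, weighted design will behave somewhat differently): with n = 742 and the worst case p = 0.5,

    margin of error ≈ 1.96 · √(0.5 · 0.5 / 742) ≈ 0.036, i.e. about ±3.6 percentage points at 95% confidence.

    That's comparable to typical ~1000-person national polls, so "under 1000 people" isn't by itself a red flag; the bigger questions are the weighting and, as noted, the geographic spread.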

  • Cloudflare to AI Crawlers: Pay or be blocked

    Make a dummy Google Account, and log into it when on the VPN. Having an ad history avoids the blocks usually. (Note: only do this if your browsing is not activist related/etc) Also, if it's image captchas that never end, switch to the accessibility option for the captcha.
  • Don't get them wrong, they don't do this for you, or even out of morals. It just affects other interests too much.
  • You said it yourself: extra places that need human attention ... those need ... humans, right? It's easy to say "let AI find the mistakes", but that tells us nothing at all. There's no substance; it's just a sales pitch for snake oil. In reality, there are various ways one can leverage technology to identify various errors, but that only happens through the focused actions of people who actually understand the details of what's happening. And think about it: we already have computer systems that monitor patients' real-time data when they're hospitalized. We already have systems that check for allergies in prescribed medication. We already have systems for all kinds of safety mechanisms. We're already using safety tech in hospitals, so what can be inferred from a vague headline about AI doing something that's ... checks notes ... already being done? Yeah, the safe money is that it's just a scam.
  • Role of Email Deliverability Consulting in ROI

  • I believe that's what a write-down generally reflects: the asset is now worth less than its previous book value. Resale value isn't the most accurate way to look at it, but it generally works for explaining it: if I bought a tool for 100€, I'd book it as 100€ worth of tools. If I wanted to sell it again after using it for a while, I'd get less than those 100€ back for it, so I'd write down that difference as a loss. With buying / depreciating / selling companies instead of tools, things become more complex, but the basic idea still holds: if the whole of the company's value goes down, you write down the difference too. So unless these guys bought it for five times its value, they'll have paid less for it than they originally got.
  • If AI constantly refined its own output, sure, unless it hits a wall eventually or starts spewing bullshit because of some quirk of training. But I doubt it could learn to summarise better without external input, just like a compiler won't produce a more optimised version of itself without human development work.
  • X/Twitter Pause Encrypted DMs.

    There may be several reasons for this. If I had to guess, they found a critical flaw and had to shut it down for security reasons.
  • Apparently, it was required to be allowed in that state. Reading a bit more: during the sentencing phase in that state, people making victim impact statements can choose their format for expression, and it's entirely allowed to make statements about what other people would say. So the judge didn't actually have grounds to deny it. No jury during that phase, so it's just the judge listening to free-form requests in both directions. It's gross, but the rules very much allow the sister to make a statement about what she believes her brother would have wanted to say, in whatever format she wanted. From: https://sh.itjust.works/comment/18471175

    influence the sentence

    From what I've seen, to be fair, judges' decisions have varied wildly regardless, sadly, and sentences should be more standardized. I wonder what it would've been otherwise.