It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes

Technology
  • In favour of AI, absolutely; against it, no, I can't see it. What group would want to devalue AI? After all, most of the big tech companies are developing their own. They would want people to use AI; that's the only way they make a profit.

    You keep providing these vague justifications for your belief but you never actually provide a concrete answer.

    Which groups in particular do you think are paying people to astroturf with negative AI comments? Which actual organisations, which companies? Do you have evidence for this beyond "lots of people on a technically inclined forum don't like it"? That seems to be a fairly self-selecting set. You are seeing patterns in the clouds and insisting that they are meaningful.

    You call it “patterns in the clouds,” but that’s how coordinated media campaigns are meant to look: organic, coincidental, invisible unless you recognize the fingerprints. Spotting those fingerprints isn’t tinfoil-hat stuff; it’s basic media literacy.

    And let’s be real: plenty of groups have motives to discourage everyday people from embracing AI.

    Political think tanks and content farms (Heritage Foundation, Koch networks...) already pay for astroturfing campaigns and troll farms. They do it on issues like immigration, climate, and COVID. Why would AI magically be exempt?

    Reputation management/PR firms (Bent Pixels, marketing shops, crisis comms firms) literally get paid to scrub and reshape narratives online. Their business model depends on you not having the same tools for cheap or free.

    Established media and gatekeepers survive on controlling distribution pipelines. The more people use AI to generate, remix, and distribute their own content, the less leverage those outlets have.

    Now why does this matter with AI in particular? Because AI isn’t just another app; it’s a force multiplier for individuals.

    A single parent can spin up an online store, write copy, generate images, and market it without hiring an agency.

    A student can build an interactive study tool in a weekend that used to take a funded research lab.

    An activist group can draft policy briefs, make explainer videos, and coordinate messaging with almost no budget.

    These kinds of tools only get created if ordinary people are experimenting, collaborating, and embracing AI. That’s what the “don’t trust AI” narrative is designed to discourage. If you keep people from touching it, you keep them dependent on the existing gatekeepers.

    So flip your own question: who pays for these narratives? The same people who already fund copy-paste headline campaigns like “illegals are taking our jobs and assaulting Americans.” It’s the same yellow-journalism playbook, just aimed at a new target.

    Dismissing this as “cloud patterns” is the exact mindset they hope you have. Because once you acknowledge how coordinated media framing works, you start to see that of course there are groups with the motive and budget to poison the well on AI.

  • You call it “patterns in the clouds,” but that’s how coordinated media campaigns are meant to look: organic, coincidental, invisible unless you recognize the fingerprints.

    Consider these recent examples:

    The pro-Russia “Operation Overload” campaign used free AI tools to push disinformation—including deepfakes and fake news sites—on a scale that jumped from 230 to 587 unique content pieces in under a year.

    AI-generated bots and faux media orchestrated coordinated boycotts of Amazon and McDonald’s over DEI reversals—with no clear ideology, just engineered outrage.

    Social media networks ahead of the 2024 U.S. election were crawling with coordination networks sharing AI-generated manipulative images and narrative content, and most such accounts remain active.

    Across the globe, AI deepfakes and election misinformation campaigns surged from France to Ghana to South Africa—showing clear strategic deployment, not random dissent.

    Because AI expands creative sovereignty. It enables:

    People to bypass expensive gatekeepers and build tools, stories, and businesses.

    Activists and community groups to publish, advocate, and organize without top-down approval.

    Everyday people to become producers, not just consumers.

    The moment ordinary people gain these capabilities, the power structures that rely on gatekeeping, be they think tanks, PR firms, old-guard media, or political operatives, have every incentive to suppress or smear AI usage. That’s why “AI is dangerous” is convenient messaging for them.

    The real question isn’t whether the cloud patterns are real; it’s this: why shouldn’t we expect influential actors to use AI to shape perception, especially when it threatens their control?

    Lemmy isn’t just a random forum; it’s one of the last bastions of “tech-savvy” community space outside the mainstream. That makes it a perfect target for poisoning-the-well campaigns. If you can seed anti-AI sentiment there, you don’t just reach casual users; you capture the early adopters and opinion leaders who influence the wider conversation.

    I haven't checked my feed, but good money says I can find multiple "fuck AI" posts that sound a lot like "they took our jobs."

  • So what you are saying is, my car is a typewriter?

    Did you just take a picture of your car's boobs at 60 km/h? High-speed boob shots hahahaha

    Did you just take a picture of your car's boobs at 60 km/h? High-speed boob shots hahahaha

    Maybe 😛

    It's a legal requirement that when your car hits 80085 you must take a photo; it supersedes all other laws.

  • Money quote:

    Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.

    Are you kidding? Microsoft has always been shit at math. According to Microsoft Excel, 2 + 2 = 12:04 AM Jan 1, 1900.

  • Are you kidding? Microsoft has always been shit at math. According to Microsoft Excel, 2 + 2 = 12:04 AM Jan 1, 1900.

    Integers are days in Excel, no? So I think 2 + 2 = 12:00 AM Jan 4, 1900.
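
    Quick sanity check in Python: in Excel's 1900 date system, serial 1 displays as Jan 1, 1900, so 2 + 2 = 4 lands on Jan 4, 1900. The converter below is a rough sketch (the function name is made up) that ignores Excel's phantom Feb 29, 1900, so it is only exact for serials below 60:

```python
from datetime import date, timedelta

# In Excel's 1900 date system, serial number 1 displays as Jan 1, 1900.
EXCEL_EPOCH = date(1899, 12, 31)

def excel_serial_to_date(serial: int) -> date:
    """Convert an Excel date serial to a calendar date.

    Rough sketch: ignores Excel's phantom Feb 29, 1900 (a Lotus 1-2-3
    compatibility bug), so it is only exact for serials below 60.
    """
    return EXCEL_EPOCH + timedelta(days=serial)

print(excel_serial_to_date(2 + 2))  # 1900-01-04
```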

  • This is such a misguided article, sorry.

    Obviously you’d be an idiot to use AI to number crunch.

    But AI can be extremely useful for sentence analytics. For example, if you’re trying to classify user feedback as positive or negative and then derive categories from the masses of text and squash the text into those categories.

    Google Sheets already does tonnes of this and we’re not writing articles about it.
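
    The classify-then-squash workflow is easy to sketch. Below, a trivial keyword matcher stands in for the actual model call; in practice you'd prompt an LLM (or a Sheets AI function) for the labelling step, and all the names here are made up for illustration:

```python
from collections import Counter

# Stand-in for the model: map free-text feedback onto fixed categories.
# A real pipeline would ask an LLM to pick the label instead.
CATEGORY_KEYWORDS = {
    "pricing":     ["price", "expensive", "cost"],
    "performance": ["slow", "lag", "crash"],
    "support":     ["support", "help", "response"],
}

def categorize(comment: str) -> str:
    text = comment.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in text for word in keywords):
            return category
    return "other"

feedback = [
    "Way too expensive for what it does",
    "The app is slow on my phone",
    "Support never answered my email",
    "Love the new design",
]

# Squash the free text into category counts.
print(Counter(categorize(c) for c in feedback))
```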

    Yeah, it's like complaining that a hammer isn't good at turning a screw. There's a whole trend of Chess content creators featuring games against ChatGPT where it forgets the position or plays illegal moves, and it just doesn't mean anything. ChatGPT was never designed or intended to be able to evaluate a chess position, and incidentally, we do have computer programs that do exactly that and have been better than any human player since the 1990s. So what is even the point?

  • Yeah, it's like complaining that a hammer isn't good at turning a screw. There's a whole trend of Chess content creators featuring games against ChatGPT where it forgets the position or plays illegal moves, and it just doesn't mean anything. ChatGPT was never designed or intended to be able to evaluate a chess position, and incidentally, we do have computer programs that do exactly that and have been better than any human player since the 1990s. So what is even the point?

    And what you could do is enable an LLM to use these tools and reason about their results. Complaining that an LLM isn’t good at adding numbers is like complaining that humans aren’t as fast as calculators at multiplying large numbers.
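
    That tool-use pattern is simple to sketch: the model decides which tool to call, the harness executes it, and the result goes back to the model. Below is a minimal dispatch loop with a hard-coded stand-in for the model (`fake_llm` and its JSON reply format are invented for illustration; real APIs such as OpenAI's function calling follow the same shape):

```python
import json

def calculator(expression: str) -> str:
    """The tool: exact arithmetic, which LLMs are unreliable at."""
    # eval() is unsafe on untrusted input; acceptable in a sketch.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model: always asks for the calculator."""
    return json.dumps({"tool": "calculator", "args": "123456 * 789"})

def answer(question: str) -> str:
    decision = json.loads(fake_llm(question))
    result = TOOLS[decision["tool"]](decision["args"])
    # A real loop would feed `result` back to the model for a final reply.
    return result

print(answer("What is 123456 times 789?"))  # 97406784
```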

  • Wrong, they already had that with Excel. There were a bunch of functions that returned wrong results for years, and none of the users (mostly economists) noticed.

    What, you don't always work with 16-digit numbers that get silently truncated? What could go wrong? We don't use 16-digit numbers for anything, really.

    It's hard to believe that's still a thing but it is!
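
    For the curious: Excel keeps IEEE doubles internally but only honours 15 significant decimal digits on entry; anything past the 15th digit is silently zeroed. A rough Python imitation of that truncation (the helper name is made up):

```python
from decimal import Context, ROUND_DOWN

# Excel zeroes every digit past the 15th significant one on entry.
# Truncating to 15 significant figures imitates that behaviour:
EXCEL_CTX = Context(prec=15, rounding=ROUND_DOWN)

def excel_entry(n: int) -> int:
    return int(EXCEL_CTX.create_decimal(n))

print(excel_entry(1234567890123456))  # 1234567890123450 -- the 6 is gone
```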

  • That was partly a result of seeking explicit compatibility with Lotus, IIRC.

    seeking explicit compatibility with Lotus

    I need a shower.
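
    For context, the most famous bit of that Lotus compatibility is the phantom date: Lotus 1-2-3 treated 1900 as a leap year, so Excel still accepts Feb 29, 1900 (serial 60), a day that never happened. Python's datetime, by contrast, rejects it:

```python
import calendar
from datetime import date

# 1900 is divisible by 100 but not by 400, so it was not a leap year.
# Lotus 1-2-3 got this wrong, and Excel copied the bug for file
# compatibility: Excel serial 60 displays as Feb 29, 1900.
print(calendar.isleap(1900))  # False

try:
    date(1900, 2, 29)
except ValueError:
    print("datetime refuses the phantom date")
```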
