
It Took Many Years And Billions Of Dollars, But Microsoft Finally Invented A Calculator That Is Wrong Sometimes

Technology
  • No, I'd definitely agree that AI sentiment overall is pretty negative. I'm not such a hardliner myself, but hardliners are definitely out there. I don't see it as astroturfing at all; even suggesting that is ironic, because LLMs are the ultimate astroturfing tool. The institutions capable of astroturfing support AI and are using it. What institution or organization are you accusing of anti-AI astroturfing, exactly? That claim can't be taken seriously without an answer.

    IMO the problem is not LLMs themselves, which are very compelling and interesting for strictly language processing and enable software use cases that were almost impossible to implement programmatically before; the problem is that LLMs are being used for use cases they are not suited for, driven by massive investment and hype. "We spent all this money on this, so now we have to use it for everything." It's wrong. LLMs are not knowledge stores, they are provably bad at summarization and as a search interface, and they should especially not be used for decision making in any context. People are reacting to the way LLMs are being forced into all of these roles.

    People also take strong issue with their perceived violation of intellectual property and training on copyrighted information, viewing AI generated arts as derivative and theft.

    Plus, there are very negative consequences to generative AI that aren't yet fully addressed. Environmental impact. Deepfakes. They're a propaganda machine; they can be censored and reflect biases of the institutions that control them. Parasocial relationships, misguided self-validating "therapy". They degrade human creativity and become a crutch. Impacts on education and cheating. Replacement of jobs and easier exploitation of workers. Surveillance.

    All of these things are valid and I hear them all from people around me, not just on the internet.

    You're probably debating a tool.

  • That explains why they didn't want anyone investigating the machines. Did proper authorities finally get access (no pun intended lol) to investigate? Or was that already known?

    No, this came out after the election was settled. There was a woman who maintained a website covering all these details, called BlackBoxVoting or something like that (long gone now).

  • You're probably debating a tool.

    Honestly, probably lol

  • "Microsoft Excel is testing a new AI-powered function that can automatically fill cells in your spreadsheets."

    Every year, Microsoft gives me more reasons to permanently leave their products.

    Unfortunately, due to compatibility with financial and other Windows-only software, I still need to run Windows, but I am down to two rigs and it might go down to one in the new year.

    A virtual machine running Windows, hosting just those apps, might be a good step away at this point.

  • Keep her!

    That's the plan! Married 13 years this year 😊

  • Money quote:

    Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.

    Wrong, they already had that with Excel. There were a bunch of functions that returned wrong results for years, and none of the users (mostly economists) noticed.
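A minimal sketch of how such errors arise. This assumes the well-documented class of bug (older spreadsheet statistical functions used a numerically unstable one-pass variance formula); it is an illustration of the failure mode, not Excel's actual code:

```python
# Illustrative sketch, NOT Excel's actual implementation: older
# spreadsheet versions computed variance with the one-pass "textbook"
# formula sum(x^2)/n - mean^2, which suffers catastrophic cancellation
# when values are large relative to their spread.

def variance_one_pass(xs):
    """Unstable one-pass formula of the kind older spreadsheets used."""
    n = len(xs)
    mean = sum(xs) / n
    return sum(x * x for x in xs) / n - mean * mean

def variance_two_pass(xs):
    """Stable two-pass formula: subtract the mean before squaring."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# Large offset, small spread: the worst case for the one-pass formula.
data = [1e9 + d for d in (4.0, 7.0, 13.0, 16.0)]
print(variance_two_pass(data))  # 22.5, the true population variance
print(variance_one_pass(data))  # wildly wrong; can even come out negative
```

The one-pass result is off by orders of magnitude here, and a variance below zero is mathematically impossible, which is exactly the kind of silently wrong answer users went years without noticing.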

  • You really can’t imagine why corporations and political groups who spend billions paying people to manufacture narratives and flood feeds might hate the idea of ordinary people suddenly having their own free, on-demand content factory, fact-checker, and megaphone?

    That's on both sides of the political spectrum.
    These AI tools are not just a Google chat. You can build with them rapidly. Is it some revolutionary thing? No.

    But can it be a game changer in some areas? Absolutely.

    They moved rapidly with the media on this. Compare headlines for AI to any other yellow-journalism topic. They're identical.

    In favour of AI, absolutely; against it, no, I can't. What group would want to devalue AI? After all, most of the big tech companies are developing their own. They would want people to use AI; that's the only way they make a profit.

    You keep providing these vague justifications for your belief, but you never actually provide a concrete answer.

    Which groups in particular do you think are paying people to astroturf with negative AI comments? Which actual organisations, which companies? Do you have evidence for this beyond "lots of people on a technically inclined forum don't like it"? Because that seems to be a fairly self-selecting set. You are seeing patterns in the clouds and insisting that they are meaningful.

  • Give Microsoft some credit! Excel has been able to come up with wrong answers for decades. For example, reporting 1900 as a leap year.

    That was partly a result of seeking explicit compatibility with Lotus, IIRC.
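The discrepancy is easy to demonstrate against the real Gregorian calendar, for instance in Python: 1900 is divisible by 100 but not by 400, so it is not a leap year, yet Excel's 1900 date system accepts "Feb 29, 1900" for Lotus 1-2-3 compatibility.

```python
from datetime import date
import calendar

# Gregorian rule: century years are leap years only if divisible by 400.
print(calendar.isleap(1900))  # False
print(calendar.isleap(2000))  # True

# The real calendar refuses the date Excel's 1900 system accepts:
try:
    date(1900, 2, 29)
except ValueError as e:
    print(e)  # day is out of range for month
```

One knock-on effect: Excel serial dates before March 1, 1900 are shifted by one day relative to the real calendar.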

  • Money quote:

    Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.

    This is such a misguided article, sorry.

    Obviously you’d be an idiot to use AI to number crunch.

    But AI can be extremely useful for sentence analytics. For example, if you’re trying to classify user feedback as positive or negative and then derive categories from the masses of text and squash the text into those categories.

    Google Sheets already does tonnes of this and we’re not writing articles about it.
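The workflow described above can be sketched very roughly. A real pipeline would call an LLM or a trained sentiment model for each cell; the keyword matcher below is only a hypothetical stand-in, and all the keyword lists and category names are made up for illustration:

```python
# Toy stand-in for the classify-and-bucket workflow: label free-text
# feedback as positive/negative, then squash it into coarse categories.
# In practice the two lookups would be model calls, not keyword sets.

POSITIVE = {"love", "great", "fast", "easy"}
NEGATIVE = {"slow", "crash", "confusing", "broken"}
CATEGORIES = {
    "performance": {"slow", "fast", "lag"},
    "stability": {"crash", "broken", "bug"},
    "usability": {"easy", "confusing", "ui"},
}

def classify(feedback):
    """Return a (sentiment, category) pair for one feedback string."""
    words = set(feedback.lower().split())
    sentiment = ("positive" if words & POSITIVE
                 else "negative" if words & NEGATIVE
                 else "neutral")
    category = next((c for c, kws in CATEGORIES.items() if words & kws),
                    "other")
    return sentiment, category

print(classify("the app is fast and easy to use"))  # ('positive', 'performance')
print(classify("constant crash and broken sync"))   # ('negative', 'stability')
```

Applied per row of a feedback column, this is exactly the "derive categories from masses of text" job that language models are well suited to, as opposed to number crunching.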

  • Apparently you should be able to run any Windows app with WinApps on Linux, but I think they have a bug or something right now, because I haven't been able to get it to work.

    The problem is that they are still Microsoft applications. You can’t say “I’ll leave Microsoft!” and run their software in a Windows simulator anyway. That would be … inaccurate, to say the least.

  • That's my thinking

    If you know what you're doing, it's significantly easier to do it yourself

    You at least have some reassurance it's correct (or at least thought through)

    Verification is important, but I think you're overlooking a real and large category of people who have a basic familiarity with spreadsheets and computers, and so can understand a potential solution and judge whether it makes sense, but who could not quickly come up with it themselves.

    In language it's the difference between receptive and productive vocabulary: there are words which you understand but which you would never say or write because they're part of your receptive, but not productive knowledge.

    There are times when this will go wrong, because the LLM can produce something plausible but incorrect and such a person will fail to spot it. And of course, if you blindly trust it with something you're not actually capable of checking (or willing to check), you will also get bad results.

  • Money quote:

    Excel requires some skill to use (to the point where high-level Excel is a competitive sport), and AI is mostly an exercise in deskilling its users and humanity at large.

    Intel already did that in the 90's with the FDIV bug.
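The canonical FDIV test case is 4195835 / 3145727: a flawed Pentium returned roughly 1.33373906, wrong from about the fifth significant digit, where any correct divider gives roughly 1.33382045. A quick check of the kind circulated at the time:

```python
# The famous Pentium FDIV probe: on a correct FPU, x - (x / y) * y
# is (approximately) zero; the flawed Pentium produced 256 here.
x, y = 4195835, 3145727

print(x / y)            # ~1.333820449..., the correct quotient
print(x - (x / y) * y)  # ~0 (up to float rounding) on a correct divider
```

Running this on an affected chip was enough for end users to confirm the defect, which is part of why Intel ultimately offered replacements.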

  • In favour of AI, absolutely; against it, no, I can't. What group would want to devalue AI? After all, most of the big tech companies are developing their own. They would want people to use AI; that's the only way they make a profit.

    You keep providing these vague justifications for your belief, but you never actually provide a concrete answer.

    Which groups in particular do you think are paying people to astroturf with negative AI comments? Which actual organisations, which companies? Do you have evidence for this beyond "lots of people on a technically inclined forum don't like it"? Because that seems to be a fairly self-selecting set. You are seeing patterns in the clouds and insisting that they are meaningful.

    You call it “patterns in the clouds,” but that’s exactly how coordinated media campaigns are meant to look: organic, coincidental, invisible unless you recognize the fingerprints. Spotting those fingerprints isn’t tinfoil-hat stuff, it’s basic media literacy.

    And let’s be real: plenty of groups have motives to discourage everyday people from embracing AI.

    Political think tanks and content farms (Heritage Foundation, Koch networks...) already pay for astroturfing campaigns and troll farms. They do it on issues like immigration, climate, and COVID. Why would AI magically be exempt?

    Reputation management/PR firms (Bent Pixels, marketing shops, crisis comms firms) literally get paid to scrub and reshape narratives online. Their business model depends on you not having the same tools for cheap or free.

    Established media and gatekeepers survive on controlling distribution pipelines. The more people use AI to generate, remix, and distribute their own content, the less leverage those outlets have.

    Now why does this matter with AI in particular? Because AI isn’t just another app; it’s a force multiplier for individuals.

    A single parent can spin up an online store, write copy, generate images, and market it without hiring an agency.

    A student can build an interactive study tool in a weekend that used to take a funded research lab.

    An activist group can draft policy briefs, make explainer videos, and coordinate messaging with almost no budget.

    These kinds of tools only get created if ordinary people are experimenting, collaborating, and embracing AI. That’s what the “don’t trust AI” narrative is designed to discourage. If you keep people from touching it, you keep them dependent on the existing gatekeepers.

    So flip your own question: who pays for these narratives? The same people who already fund copy-paste headline campaigns like “illegals are taking our jobs and assaulting Americans.” It’s the same yellow-journalism playbook, just aimed at a new target.

    Dismissing this as “cloud patterns” is the exact mindset they hope you have, because once you acknowledge how coordinated media framing works, you start to see that of course there are groups with the motive and budget to poison the well on AI.

  • You call it “patterns in the clouds,” but that’s exactly how coordinated media campaigns are meant to look: organic, coincidental, invisible unless you recognize the fingerprints. Spotting those fingerprints isn’t tinfoil-hat stuff, it’s basic media literacy.

    And let’s be real: plenty of groups have motives to discourage everyday people from embracing AI.

    Political think tanks and content farms (Heritage Foundation, Koch networks...) already pay for astroturfing campaigns and troll farms. They do it on issues like immigration, climate, and COVID. Why would AI magically be exempt?

    Reputation management/PR firms (Bent Pixels, marketing shops, crisis comms firms) literally get paid to scrub and reshape narratives online. Their business model depends on you not having the same tools for cheap or free.

    Established media and gatekeepers survive on controlling distribution pipelines. The more people use AI to generate, remix, and distribute their own content, the less leverage those outlets have.

    Now why does this matter with AI in particular? Because AI isn’t just another app; it’s a force multiplier for individuals.

    A single parent can spin up an online store, write copy, generate images, and market it without hiring an agency.

    A student can build an interactive study tool in a weekend that used to take a funded research lab.

    An activist group can draft policy briefs, make explainer videos, and coordinate messaging with almost no budget.

    These kinds of tools only get created if ordinary people are experimenting, collaborating, and embracing AI. That’s what the “don’t trust AI” narrative is designed to discourage. If you keep people from touching it, you keep them dependent on the existing gatekeepers.

    So flip your own question: who pays for these narratives? The same people who already fund copy-paste headline campaigns like “illegals are taking our jobs and assaulting Americans.” It’s the same yellow-journalism playbook, just aimed at a new target.

    Dismissing this as “cloud patterns” is the exact mindset they hope you have, because once you acknowledge how coordinated media framing works, you start to see that of course there are groups with the motive and budget to poison the well on AI.

    Consider these recent examples:

    The pro-Russia “Operation Overload” campaign used free AI tools to push disinformation, including deepfakes and fake news sites, on a scale that jumped from 230 to 587 unique content pieces in under a year.

    AI-generated bots and faux media orchestrated coordinated boycotts of Amazon and McDonald’s over DEI reversals, with no clear ideology, just engineered outrage.

    Social media networks ahead of the 2024 U.S. election were crawling with coordination networks sharing AI-generated manipulative images and narrative content, and most such accounts remain active.

    Across the globe, AI deepfakes and election misinformation campaigns surged from France to Ghana to South Africa, showing clear strategic deployment, not random dissent.

    Because AI expands creative sovereignty. It enables:

    Individuals to bypass expensive gatekeepers and build tools, stories, and businesses.

    Activists and community groups to publish, advocate, and organize without top-down approval.

    Everyday people to become producers, not just consumers.

    The moment ordinary people gain these capabilities, the power structures that rely on gatekeeping, be they think tanks, PR firms, old-guard media, or political operatives, have every incentive to suppress or smear AI usage. That’s why “AI is dangerous” is convenient messaging for them.

    The real question isn’t whether the patterns are real; it’s why we shouldn’t expect influential actors to use AI to shape perception, especially when it threatens their control.

    Lemmy isn’t just a random forum; it’s one of the last bastions of “tech-savvy” community space outside the mainstream. That makes it a perfect target for poisoning-the-well campaigns. If you can seed anti-AI sentiment there, you don’t just reach casual users, you capture the early adopters and opinion leaders who influence the wider conversation.

    I haven't checked my feed, but good money says I can find multiple "fuck AI" posts that sound just like "they took our jobs."
