
Generative AI's most prominent skeptic doubles down

Technology
  • Vancouver (AFP) – Two and a half years since ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.

    Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.

    Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.

    "Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

    Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative.

    The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.

    "I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

    His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise.

    Many believe humanity stands on the cusp of achieving superintelligence, or artificial general intelligence (AGI) -- technology that could match and even surpass human capability.

    That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.

    Yet for all the hype, the practical gains remain limited.

    The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

    Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.

    "One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained.

    This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."

    Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.
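
    To make the contrast concrete, here is a toy sketch of the neurosymbolic pattern in Python (entirely illustrative -- the predicates, rule, and confidence value are invented, and this is not Marcus's actual system): a learned component proposes scored facts, and an explicit, inspectable rule layer reasons over them.

    # Toy neurosymbolic pattern: a "neural" component (stubbed here as a
    # keyword scorer) proposes facts with confidences; a symbolic layer
    # applies explicit logical rules over those facts.

    def neural_fact_extractor(sentence):
        """Stand-in for a learned model: text in, scored facts out."""
        facts = {}
        if "Socrates" in sentence and "man" in sentence:
            facts[("is_man", "socrates")] = 0.97  # invented confidence
        return facts

    # Symbolic side: the hand-written rule "every man is mortal".
    RULES = [("is_man", "is_mortal")]

    def symbolic_inference(facts, threshold=0.9):
        """Apply rules only to facts the neural side is confident about."""
        derived = dict(facts)
        for premise, conclusion in RULES:
            for (pred, arg), conf in facts.items():
                if pred == premise and conf >= threshold:
                    derived[(conclusion, arg)] = conf
        return derived

    print(symbolic_inference(neural_fact_extractor("Socrates is a man")))
    # -> {('is_man', 'socrates'): 0.97, ('is_mortal', 'socrates'): 0.97}

    The appeal of this division of labour is that the reasoning step is deterministic and auditable, whereas in a pure LLM the "logic" is implicit in learned weights.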

    He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."

    This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes.

    Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.

    Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

    Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.

    "The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society.

    "They have all this private data, so they can sell that as a consolation prize."

    Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much.

    "They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said.

    "But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."

    "I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there.

    Ehhh. I don't know if I'd go quite that far. I think that LLMs might be a component of a functional AGI. But just running a larger LLM model on bigger hardware is not going to suddenly barf out something that can act in the same sort of general way a human can, I agree there.

  • "I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there.

    Ehhh. I don't know if I'd go quite that far. I think that LLMs might be a component of a functional AGI. But just running a larger LLM model on bigger hardware is not going to suddenly barf out something that can act in the same sort of general way a human can, I agree there.

    If AGI is made of components, you could argue that it isn't "general". Which would be fine, as most psychologists would say the same about human intelligence

  • Yet for all the hype, the practical gains remain limited.

    I'm a fan of AI, but I still think this guy is right as far as investment and hype go. It's a useful tool. It cannot do all the things they are promising well. Both can be true.

  • "I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there.

    Ehhh. I don't know if I'd go quite that far. I think that LLMs might be a component of a functional AGI. But just running a larger LLM model on bigger hardware is not going to suddenly barf out something that can act in the same sort of general way a human can, I agree there.

    AGI is artificial GENERAL intelligence,
    The subject of the article is GENERATIVE AI or GAI.

    GAI: is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.

    (AGI)—sometimes called human‑level intelligence AI—is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks

  • Yet for all the hype, the practical gains remain limited.

    . . . He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."

    This. Plus everyone’s selling the same thing and none of it can be sold at a profit.

    It’s just like watching people use lots of feathers on sticks to try and fly. Flight will probably be possible someday, but not like that, and if you idiots are going all in on feather-sticks now, you’re horrible businesspeople.

  • If AGI is made of components, you could argue that it isn't "general". Which would be fine, as most psychologists would say the same about human intelligence.

    Can you elaborate? I don't get what you mean by this.

  • If AGI is made of components, you could argue that it isn’t “general”.

    Can you elaborate? I don't get what you mean by this.

    The idea of general intelligence (g) is that you've got some overall capacity that can be turned to any task. Human intelligence is probably much more like a big toolbox of skills, though, and I can't see a version of computer "intelligence" that's any different to that. I worry that people who get excited about AI are kidding themselves a bit as it isn't going to be general - it's going to be a toolbox at best: an LLM for writing, a totally different system for drawing, another for identifying birdsong, and yet another for maths... And at that point you've not got some special interesting AGI - you've just reinvented the idea of apps.
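
    As a minimal sketch of that "toolbox" picture (all component names invented purely for illustration), the architecture amounts to little more than a router over narrow tools:

    # Toy "toolbox" dispatcher: one narrow tool per task, glued together
    # by a router. Nothing here is a single general capacity.

    TOOLBOX = {
        "writing":  lambda prompt: f"[LLM draft for: {prompt}]",
        "drawing":  lambda prompt: f"[image model output for: {prompt}]",
        "birdsong": lambda prompt: f"[audio classifier label for: {prompt}]",
        "maths":    lambda prompt: f"[computer algebra result for: {prompt}]",
    }

    def route(task, prompt):
        """Pick the narrow tool for the task -- essentially an app launcher."""
        if task not in TOOLBOX:
            raise ValueError(f"no tool for task: {task!r}")
        return TOOLBOX[task](prompt)

    print(route("maths", "integrate x^2"))
    # -> [computer algebra result for: integrate x^2]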

  • AGI is artificial GENERAL intelligence.
    The subject of the article is GENERATIVE AI, or GAI.

    GAI is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.

    AGI -- sometimes called human-level AI -- is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.

    The article literally mentioned AGI, which is why he commented on the relationship between LLMs and AGI. I agree with him: an LLM by itself will never be an AGI, but it can definitely be a part of one, especially if it takes a role similar to the human language centre.

  • If AGI is made of components, you could argue that it isn't "general". Which would be fine, as most psychologists would say the same about human intelligence.

    Isn’t human intelligence exactly what most people mean by "general intelligence"? It becomes ASI (Artificial Superintelligence) once its capabilities surpass those of humans - which I’d argue would happen almost immediately.

  • The idea of general intelligence (g) is that you've got some overall capacity that can be turned to any task. Human intelligence is probably much more like a big toolbox of skills, though, and I can't see a version of computer "intelligence" that's any different to that. I worry that people who get excited about AI are kidding themselves a bit as it isn't going to be general - it's going to be a toolbox at best: an LLM for writing, a totally different system for drawing, another for identifying birdsong, and yet another for maths... And at that point you've not got some special interesting AGI - you've just reinvented the idea of apps.

    I think you've misunderstood what AGI means.

  • If AGI is made of components, you could argue that it isn't "general". Which would be fine, as most psychologists would say the same about human intelligence.

    The human brain is made of components, so why not an AI brain?

  • AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

    I'd argue that image generation does have a use, especially for generating reference images. Sure, there might be some weirdness, but you can fix that when you do whatever art you're using it for.

  • "I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there.

    Ehhh. I don't know if I'd go quite that far. I think that LLMs might be a component of a functional AGI. But just running a larger LLM model on bigger hardware is not going to suddenly barf out something that can act in the same sort of general way a human can, I agree there.

    I don't think so, and I believe not even the current technology used for neural network simulations will bring us to AGI, yet alone LLMs.
