Vibe coding service Replit deleted production database

Technology
  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand, then you don’t know what they’re doing wrong. They’re helpful tools, but only in this context.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants, and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.

    It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.

    I don’t think most people care about the proverbial man behind the curtain: it talks like a human, so it must be smart like a human.

  • It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.

    I don’t think most people care about the proverbial man behind the curtain: it talks like a human, so it must be smart like a human.

    it talks like a human, so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

    He had one DB for prod and dev, and no backup. The LLM went into override mode and deleted the dev DB as it was developing, but oops, that was the prod DB. And oops, no backup.

    Yeah, it’s the LLM’s and Replit’s fault. /s

    There was a backup, and it was restored. However, the LLM lied and said there wasn't at first. You can laugh all you want at it. I did. But maybe read the article so you aren't also lying.

    it talks like a human, so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

    Smart is a relative term lol.

    A stupid human is still smart when compared to a jellyfish. That said, anybody who comes away from interactions with LLMs and thinks they're smart is only slightly more intelligent than a jellyfish.

  • Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time. It doesn't really matter what we consider true abstract thought or true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.

    Well the thing is, LLMs don't seem to really "solve" complex problems. They remember solutions they've seen before.

    The example I saw was asking an LLM to solve "Towers of Hanoi" with 100 disks. This is a common recursive programming problem that takes quite a while for a human to write the answer to. The LLM manages this easily. But when asked to solve the same problem with, say, 79 disks, or 41 disks, or some other oddball number, the LLM fails to solve the problem, despite it being simpler(!).

    It can do pattern matching and provide solutions, but it's not able to come up with truly new solutions. It does not "think" in that way. LLMs are amazing data storage formats, but they're not truly 'intelligent' in the way most people think.
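
    For reference, the classic recursive program is only a few lines, and the disk count is just a parameter, so 79 or 41 disks should be no harder than 100. A minimal Python sketch:

        def hanoi(n, source, target, spare):
            # Move n disks from source to target, using spare as scratch space.
            if n == 0:
                return
            hanoi(n - 1, source, spare, target)
            print(f"move disk {n}: {source} -> {target}")
            hanoi(n - 1, spare, target, source)

        hanoi(4, "A", "C", "B")  # a full solution takes 2**n - 1 moves, so keep n small when printing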

  • in which the service admitted to “a catastrophic error of judgement”

    It’s fancy text completion; it does not have judgement.

    The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because that is no different from any other text.

    judgement

    Yeah, it admitted to an error in judgement because the prompter clearly declared it so.

    Generally, LLMs will make whatever statement about what happened that you want them to make. If you tell one it went fantastically, it will agree. If you tell it that it went terribly, it will parrot that sentiment back.

    Which is what seems to make it so dangerous for some people's mental health: a text generator that wants to agree with whatever you are saying, but does so without verbatim copying, so it gives the illusion of another thought process agreeing with them. Meanwhile, concurrent with your chat, another person starting from the exact same model is getting a dialog that violently disagrees with the first person. It's an echo chamber.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand, then you don’t know what they’re doing wrong. They’re helpful tools, but only in this context.

    What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

  • There was a backup, and it was restored. However, the LLM lied and said there wasn't at first. You can laugh all you want at it. I did. But maybe read the article so you aren't also lying.

    Not according to the Twitter thread. I went through his thread; it’s a roller coaster of amateurism.

  • Well the thing is, LLMs don't seem to really "solve" complex problems. They remember solutions they've seen before.

    The example I saw was asking an LLM to solve "Towers of Hanoi" with 100 disks. This is a common recursive programming problem that takes quite a while for a human to write the answer to. The LLM manages this easily. But when asked to solve the same problem with, say, 79 disks, or 41 disks, or some other oddball number, the LLM fails to solve the problem, despite it being simpler(!).

    It can do pattern matching and provide solutions, but it's not able to come up with truly new solutions. It does not "think" in that way. LLMs are amazing data storage formats, but they're not truly 'intelligent' in the way most people think.

    This only proves some of them can't solve all complex problems. I'm only claiming some of them can solve some complex problems. Not only by remembering exact solutions, but by remembering steps and actions used in building those solutions, generalizing, and transferring them to new problems. Anyone who tries using it for programming will discover this very fast.

    PS: Some of them have already been used to solve problems and find patterns in data that humans weren't able to get any other way (particle research at CERN, bioinformatics, etc.).

  • Shit, deleting prod is my signature move! AI is coming for my job 😵

    Just know your worth. You can do it cheaper!

  • Not mad about an estimated usage bill of $8k per month.
    Just hire a developer.

    But then how would he feel so special and smart about "doing it himself"???? Come on man, think of the rich fratboys!! They NEED to feel special and smart!!!

  • Title should be “user gave prod database access to an LLM, which deleted the DB; user did not have any backup and used the same DB for prod and dev”. Less sexy, and less the LLM’s fault.
    This is weird; it’s like the last 50 years of software development principles are being ignored.

    But the whole 'vibe coding' message is that the LLM knows all this stuff so you don't have to.

    This isn't some "LLM can do some code completion/suggestions"; it's "LLM is so magical you can be an idiot with no skills/training and still produce full-stack solutions".
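
    The principle being skipped here is cheap to follow. A minimal sketch (names hypothetical), keeping dev and prod pointed at different databases so nothing running in dev can ever touch prod data:

        import os

        # Hypothetical config: each environment gets its own database, and the
        # prod URL exists only in the production environment's secrets.
        DATABASE_URLS = {
            "dev": "postgresql://localhost/myapp_dev",
            "prod": os.environ.get("PROD_DATABASE_URL"),
        }

        def database_url():
            env = os.environ.get("APP_ENV", "dev")  # default to dev, never prod
            return DATABASE_URLS[env]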

  • it talks like a human, so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

    Yes, and they were all as smart as humans. 😉

    So mostly average but some absolute thickos too.

  • This post did not contain any content.

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

  • “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    The Pink Elephant problem of LLMs: you cannot reliably make them NOT do something.

  • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

    With vibe coding you do end up spending a lot of time waiting on prompts, so I get the results of that study.

    I fall pretty deep in the power user category for LLMs, so I don’t really feel that the study applies well to me, but also I acknowledge I can be biased there.

    I have custom proprietary MCPs for semantic search over my code bases that let the AI do repeated graph searches on my code (imagine combining a language server, ctags, networkx, and grep+fuzzy search). That is way faster than iteratively grepping and scanning code manually, with a low chance of LLM errors. By the time I open GitHub code search or run ripgrep, Claude has already prioritized and listed my modules to investigate.

    That tool alone with an LLM can save me half a day of research and debugging on complex tickets, which by itself pays for the AI subscription. I have other internal tools to accelerate work too.
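
    A rough sketch of the idea (all names hypothetical; this is not the actual proprietary tool): index which module defines each symbol, link call sites into a networkx graph, and rank modules by distance from the one under investigation:

        import networkx as nx

        def build_symbol_graph(definitions, references):
            # definitions: {symbol: defining_module}, e.g. from a ctags run
            # references: iterable of (using_module, symbol) pairs
            g = nx.DiGraph()
            for using_module, symbol in references:
                defining_module = definitions.get(symbol)
                if defining_module and defining_module != using_module:
                    g.add_edge(using_module, defining_module, symbol=symbol)
            return g

        def modules_to_investigate(graph, start_module, max_hops=2):
            # Rank modules by how few references separate them from the start module.
            hops = nx.single_source_shortest_path_length(graph, start_module, cutoff=max_hops)
            return sorted(hops, key=hops.get)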

    I use it to organize my JIRA tickets and plan my daily goals. I actually get Claude to do a lot of triage for me before I even start a task, which cuts the investigation phase to a few minutes on small tasks.

    I use it to review all my PRs before I ask a human to look; it catches a lot of small things and can correct them, so the PR avoids the bike-shedding nitpicks some reviewers love. Claude can do this; Copilot will only ever point out nitpicks, so the model makes a huge difference here. But regardless, one fewer review cycle helps keep things moving.

    It’s a huge boon to debugging, much faster than searching errors manually. It’s especially helpful on the types of errors where you have to rabbit-hole through chains of GitHub issues to solve them.

    It’s very fast at getting projects to MVP while following common structure/idioms, and it can help write unit tests quickly for me. After the MVP stage it sucks, and I go back to coding manually.

    I use it to generate code snippets where documentation sucks. If you look at the ibis library in Python, for example, the docs are byzantine and poorly organized. LLMs are better at finding the relevant docs than I am there. I mostly use LLM search instead of manual search for docs now.

    I have a lot of custom scripts and calculators and apps that I made with it which keep me more focused on my actual work and accelerate things.

    I regularly have the LLM help me write bash or python or jq scripts when I need to audit codebases for large refactors. That’s low-maintenance, one-off work that can be easily verified but is complex to write. I never remember the syntax for bash and jq even after using them for years.
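
    For example, a typical throwaway audit script of that kind (the deprecated function name is hypothetical), easy to verify by skimming its output:

        import pathlib
        import re

        # Find every remaining call site of a deprecated function before a large refactor.
        pattern = re.compile(r"\bold_api_call\(")  # hypothetical function being removed
        for path in pathlib.Path("src").rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
                if pattern.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")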

    I guess the short version is I tend to build tools for the AI, then let the LLM use those tools to improve and accelerate my workflows. That returns a lot of time back to me.

    I do try vibe coding, but I end up in the same time-sink traps as the study found. If the LLM is ever wrong, you save time by forking the chat rather than trying to realign it, but it’s still likely to be slower. Repeat chats run into the same pitfalls on complex issues and bugs, so you have to abandon that state quickly.

    Vibe coding small revisions can still be a bit faster, and it’s great at helping me with documentation.

  • This post did not contain any content.

    It sounds like this guy was also relying on the AI to self-report status. Did any of this actually happen? Like, is the Replit AI really hooked up to a CLI? Did it even make a DB to start with, was there anything useful in it, and did it actually delete it?

    Or is this all just a long roleplaying session where this guy pretends to run a business and the AI pretends to do employee stuff for him?

    Because 90% of this article is "I asked the AI and it said:", which is not a reliable source of information.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand, then you don’t know what they’re doing wrong. They’re helpful tools, but only in this context.

    When it comes to LLMs, they cannot do any work that you yourself do not understand.

    And even if they could, how would you ever validate it if you can't understand it?

  • The Pink Elephant problem of LLMs: you cannot reliably make them NOT do something.

    Just say it 12 times next time.
