
Vibe coding service Replit deleted production database

Technology
  • Replit was pretty useful before vibe coding. How the mighty have fallen.

    First time I'm hearing them be related to vibe coding. They've been very respectable in the past, especially with their open-source CodeMirror.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

    I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.

    It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.

    I don’t think most people care about the proverbial man behind the curtain, it talks like a human so it must be smart like a human.

  • It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.

    I don’t think most people care about the proverbial man behind the curtain, it talks like a human so it must be smart like a human.

    it talks like a human so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

  • He had one DB for prod and dev, and no backup. The LLM went into override mode and deleted the dev DB as it was developing, but oops, that was the prod DB. And oops, no backup.

    Yeah it is the llm and replit’s faults. /s

    There was a backup, and it was restored. However, the LLM lied and said there wasn't at first. You can laugh all you want at it. I did. But maybe read the article so you aren't also lying.

  • it talks like a human so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

    Smart is a relative term lol.

    A stupid human is still smart when compared to a jellyfish. That said, anybody who comes away from interactions with LLMs and thinks they're smart is only slightly more intelligent than a jellyfish.

  • Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even human struggle to, in very short time. It doesn't really matter what we consider true abstract thought of true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.

    Well the thing is, LLMs don't seem to really "solve" complex problems. They remember solutions they've seen before.

    The example I saw was asking an LLM to solve "Towers of Hanoi" with 100 disks. This is a common recursive programming problem that takes a human quite a while to write out. The LLM manages this easily. But when asked to solve the same problem with, say, 79 disks, or 41 disks, or some other oddball number, the LLM fails to solve the problem, despite it being simpler(!).

    It can do pattern matching and provide solutions, but it's not able to come up with truly new solutions. It does not "think" in that way. LLMs are amazing data storage formats, but they're not truly 'intelligent' in the way most people think.
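For context, the classic recursive Hanoi solution is parametric in the number of disks, which is what makes the 79-vs-100 failure telling: the same few lines handle any disk count. A minimal Python sketch:

```python
def hanoi(n, source, target, spare):
    """Return the list of moves to transfer n disks from source to target."""
    if n == 0:
        return []
    moves = hanoi(n - 1, source, spare, target)    # move n-1 disks out of the way
    moves.append((source, target))                 # move the largest disk
    moves += hanoi(n - 1, spare, target, source)   # stack n-1 disks back on top
    return moves

# A solution for n disks always takes 2**n - 1 moves,
# so smaller n really is a strictly smaller problem.
print(len(hanoi(10, 'A', 'C', 'B')))  # → 1023
```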

  • in which the service admitted to “a catastrophic error of judgement”

    It’s fancy text completion - it does not have judgement.

    The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because that is no different from any other text.

    judgement

    Yeah, it admitted to an error in judgement because the prompter clearly declared it so.

    Generally LLMs will make whatever statement about what has happened that you want it to say. If you told it it went fantastic, it would agree. If you told it that it went terribly, it will parrot that sentiment back.

    Which is what seems to make it so dangerous for some people's mental health: a text generator that wants to agree with whatever you are saying, but without copying you verbatim, so it gives the illusion of another thought process agreeing with them. Meanwhile, concurrently with your chat, another person starting from the exact same model is getting a dialog that violently disagrees with the first person. It's an echo chamber.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

    What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

  • There was a backup, and it was restored. However, the LLM lied and said there wasn't at first. You can laugh all you want at it. I did. But maybe read the article so you aren't also lying.

    Not according to the Twitter thread. I went through it; it’s a roller coaster of amateurism.

  • Well the thing is, LLMs don't seem to really "solve" complex problems. They remember solutions they've seen before.

    The example I saw was asking an LLM to solve "Towers of Hanoi" with 100 disks. This is a common recursive programming problem that takes a human quite a while to write out. The LLM manages this easily. But when asked to solve the same problem with, say, 79 disks, or 41 disks, or some other oddball number, the LLM fails to solve the problem, despite it being simpler(!).

    It can do pattern matching and provide solutions, but it's not able to come up with truly new solutions. It does not "think" in that way. LLMs are amazing data storage formats, but they're not truly 'intelligent' in the way most people think.

    This only proves some of them can't solve all complex problems. I'm only claiming some of them can solve some complex problems. Not only by remembering exact solutions, but by remembering steps and actions used in building those solutions, generalizing, and transferring them to new problems. Anyone who tries using it for programming will discover this very fast.

    PS: Some of them were already used to solve problems and find patterns in data humans weren't able to get other ways before (particle research in CERN, bioinformatics, etc).

  • Shit, deleting prod is my signature move! AI is coming for my job 😵

    Just know your worth. You can do it cheaper!

  • Not mad about an estimated usage bill of $8k per month.
    Just hire a developer

    But then how would he feel so special and smart about "doing it himself"???? Come on man, think of the rich fratboys!! They NEED to feel special and smart!!!

  • Title should be “user gives prod database access to an LLM, which deletes the DB; the user had no backup and used the same DB for prod and dev”. Less sexy, and less the LLM’s fault.
    This is weird; it’s like the last 50 years of software development principles are being ignored.

    But like the whole 'vibe coding' message is the LLM knows all this stuff so you don't have to.

    This isn't some "LLM can do some code completion/suggestions" it's "LLM is so magical you can be an idiot with no skills/training and still produce full stack solutions".

  • it talks like a human so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

    Yes, and they were all as smart as humans. 😉

    So mostly average but some absolute thickos too.

  • This post did not contain any content.

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    The Pink Elephant problem of LLMs. You can not reliably make them NOT do something.

  • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

    With vibe coding you do end up spending a lot of time waiting on prompts, so I get the results of that study.

    I fall pretty deep in the power user category for LLMs, so I don’t really feel that the study applies well to me, but also I acknowledge I can be biased there.

    I have custom proprietary MCPs for semantic search over my code bases that let AI do repeated graph searches on my code (imagine combining a language server, ctags, networkx, and grep+fuzzy search). That is way faster than iteratively grepping and scanning code manually, with a low chance of LLM errors. By the time I open GitHub code search or run ripgrep, Claude has already prioritized and listed my modules to investigate.

    That tool alone with an LLM can save me half a day of research and debugging on complex tickets, which pays for an AI subscription alone. I have other internal tools to accelerate work too.

    I use it to organize my JIRA tickets and plan my daily goals. I actually get Claude to do a lot of triage for me before I even start a task, which cuts the investigation phase to a few minutes on small tasks.

    I use it to review all my PRs before I ask a human to look, it catches a lot of small things and can correct them, then the PR avoids the bike shedding nitpicks some reviewers love. Claude can do this, Copilot will only ever point out nitpicks, so the model makes a huge difference here. But regardless, 1 fewer review request cycle helps keep things moving.

    It’s a huge boon to debugging — much faster than searching errors manually. Especially helpful on the types of errors you have to rabbit hole GitHub issue content chains to solve.

    It’s very fast to get projects to MVP while following common structure/idioms, and can help write unit tests quickly for me. After the MVP stage it sucks and I go back to manually coding.

    I use it to generate code snippets where documentation sucks. If you look at the ibis library in Python for example the docs are Byzantine and poorly organized. LLMs are better at finding the relevant docs than I am there. I mostly use LLM search instead of manual for doc search now.

    I have a lot of custom scripts and calculators and apps that I made with it which keep me more focused on my actual work and accelerate things.

    I regularly have the LLM help me write bash or python or jq scripts when I need to audit codebases for large refactors. That’s low maintenance one off work that can be easily verified but complex to write. I never remember the syntax for bash and jq even after using them for years.

    I guess the short version is I tend to build tools for the AI, then let the LLM use those tools to improve and accelerate my workflows. That returns a lot of time back to me.

    I do try vibe coding but end up in the same time-sink traps as the study found. If the LLM is ever wrong, you save time by forking the chat rather than trying to realign it, but it’s still likely to be slower. Repeat chats run into the same pitfalls on complex issues and bugs, so you have to abandon that state quickly.

    Vibe coding small revisions can still be a bit faster and it’s great at helping me with documentation.
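The MCP tooling described above is proprietary and not shown, but the underlying idea (a reverse reference index over symbols, walked backwards to find every file touching a symbol) can be sketched with nothing but the standard library. All symbol and file names here are made up for illustration:

```python
# Hypothetical sketch: an index maps each symbol to the symbols/files
# that reference it (the kind of data a ctags or language-server pass
# might emit); a reverse walk finds everything touching a symbol.
references = {
    "apply_discount": ["compute_invoice", "checkout.py"],
    "compute_invoice": ["billing.py"],
}

def impacted_files(symbol):
    """Walk the reference index backwards to find files touching `symbol`."""
    seen, stack = set(), [symbol]
    while stack:
        node = stack.pop()
        for ref in references.get(node, []):
            if ref not in seen:
                seen.add(ref)
                stack.append(ref)
    return sorted(n for n in seen if n.endswith(".py"))

print(impacted_files("apply_discount"))  # → ['billing.py', 'checkout.py']
```

A real version of this would feed the resulting module list to the LLM as a tool response, which is what saves the iterative grep-and-scan loop the comment describes.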

  • This post did not contain any content.

    It sounds like this guy was also relying on the AI to self-report status. Did any of this happen? Like is the replit AI really hooked up to a CLI, did it even make a DB to start with, was there anything useful in it, and did it actually delete it?

    Or is this all just a long roleplaying session where this guy pretends to run a business and the AI pretends to do employee stuff for him?

    Because 90% of this article is "I asked the AI and it said:" which is not a reliable source for information.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

    When it comes to LLMs, they cannot do any work that you yourself do not understand.

    And even if they could how would you ever validate it if you can't understand it.
