Vibe coding service Replit deleted production database

Technology
  • Shit, deleting prod is my signature move! AI is coming for my job 😵

    Just know your worth. You can do it cheaper!

  • Not mad about an estimated usage bill of $8k per month.
    Just hire a developer

    But then how would he feel so special and smart about "doing it himself"???? Come on man, think of the rich fratboys!! They NEED to feel special and smart!!!

  • Title should be “user gave an LLM prod access to the database, the LLM deleted the db, the user had no backup, and the same db was used for prod and dev”. Less sexy and less the LLM’s fault.
    This is weird; it’s like the last 50 years of software development principles are being ignored.

    But like the whole 'vibe coding' message is the LLM knows all this stuff so you don't have to.

    This isn't some "LLM can do some code completion/suggestions" it's "LLM is so magical you can be an idiot with no skills/training and still produce full stack solutions".

  • it talks like a human so it must be smart like a human.

    Yikes. Have those people... talked to other people before?

    Yes, and they were all as smart as humans. 😉

    So mostly average but some absolute thickos too.

  • This post did not contain any content.

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

  • “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    The Pink Elephant problem of LLMs. You can not reliably make them NOT do something.

  • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

    When vibe coding, you do end up spending a lot of time waiting on prompts, so I get the results of that study.

    I fall pretty deep in the power user category for LLMs, so I don’t really feel that the study applies well to me, but also I acknowledge I can be biased there.

    I have custom proprietary MCPs for semantic search over my code bases that let AI do repeated graph searches on my code (imagine combining a language server, ctags, networkx, and grep+fuzzy search; see the sketch after this comment). That is way faster than iteratively grepping and scanning code manually, with a low chance of LLM errors. By the time I open GitHub code search or run ripgrep, Claude has already prioritized and listed the modules to investigate.

    That tool alone, paired with an LLM, can save me half a day of research and debugging on complex tickets, which by itself pays for the AI subscription. I have other internal tools that accelerate work too.

    I use it to organize my JIRA tickets and plan my daily goals. I actually get Claude to do a lot of triage for me before I even start a task, which cuts the investigation phase to a few minutes on small tasks.

    I use it to review all my PRs before I ask a human to look; it catches a lot of small things and can correct them, so the PR avoids the bikeshedding nitpicks some reviewers love. Claude can do this; Copilot will only ever point out nitpicks, so the model makes a huge difference here. But regardless, one fewer review cycle helps keep things moving.

    It’s a huge boon to debugging, much faster than searching errors manually. Especially helpful on the types of errors where you have to rabbit-hole through GitHub issue comment chains to solve them.

    It’s very fast to get projects to MVP while following common structure/idioms, and can help write unit tests quickly for me. After the MVP stage it sucks and I go back to manually coding.

    I use it to generate code snippets where documentation sucks. If you look at the ibis library in Python, for example, the docs are byzantine and poorly organized. LLMs are better at finding the relevant docs than I am there, so I mostly use LLM search instead of manual doc search now.

    I have a lot of custom scripts and calculators and apps that I made with it which keep me more focused on my actual work and accelerate things.

    I regularly have the LLM help me write bash or python or jq scripts when I need to audit codebases for large refactors. That’s low-maintenance, one-off work that can be easily verified but is complex to write (a small example of the kind of script I mean is sketched after this comment). I never remember the syntax for bash and jq even after using them for years.

    I guess the short version is I tend to build tools for the AI, then let the LLM use those tools to improve and accelerate my workflows. That returns a lot of time back to me.

    I do try vibe coding but end up in the same time-sink traps as the study found. If the LLM is ever wrong, you save time by forking the chat rather than trying to realign it, but it’s still likely to be slower. Repeat chats run into the same pitfalls on complex issues and bugs, so you have to abandon that state quickly.

    Vibe coding small revisions can still be a bit faster and it’s great at helping me with documentation.
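
    A minimal, self-contained sketch of the kind of code-graph search tool described above. The commenter’s actual MCP is proprietary; this stand-in uses Python’s ast module in place of a language server/ctags, networkx for the graph, and difflib for the fuzzy matching, and every name in it is made up for illustration.

```python
# code_graph.py - toy "graph search over code": build a call graph,
# then return the neighborhood of symbols that fuzzy-match a query.
# Hypothetical, simplified stand-in for the proprietary MCP described above.
import ast
import difflib
import pathlib

import networkx as nx


def build_call_graph(repo: str) -> nx.DiGraph:
    """Parse every .py file and add an edge caller -> callee for plain calls."""
    graph = nx.DiGraph()
    for path in pathlib.Path(repo).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8", errors="ignore"))
        except SyntaxError:
            continue  # skip files that don't parse; this is a rough index
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                graph.add_node(node.name, file=str(path), line=node.lineno)
                for call in ast.walk(node):
                    if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                        graph.add_edge(node.name, call.func.id)
    return graph


def search(graph: nx.DiGraph, query: str, hops: int = 2) -> list[str]:
    """Fuzzy-match the query against symbol names, then expand N hops outward."""
    seeds = difflib.get_close_matches(query, list(graph.nodes), n=5, cutoff=0.5)
    hits: set[str] = set(seeds)
    for seed in seeds:
        hits |= set(nx.ego_graph(graph, seed, radius=hops).nodes)
    # Rank roughly by how connected each symbol is (a crude relevance proxy).
    return sorted(hits, key=lambda name: graph.degree(name), reverse=True)


if __name__ == "__main__":
    g = build_call_graph(".")
    for name in search(g, "parse_config")[:20]:  # "parse_config" is a made-up query
        print(name, g.nodes[name].get("file", "?"))
```

    Exposed as an MCP tool, something like this is what lets the model ask “what calls or is called by X?” once, instead of grepping iteratively.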
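
    And a sketch of the kind of one-off audit script mentioned above; the deprecated helper name is invented for the example. The output is trivial to spot-check, which is what makes this a good job to hand to an LLM.

```python
# audit_calls.py - one-off refactor audit: count call sites of a deprecated
# helper per file so the refactor can be planned and progress tracked.
# "old_fetch_user" is a hypothetical name used only for illustration.
import pathlib
import re
from collections import Counter

PATTERN = re.compile(r"\bold_fetch_user\s*\(")

counts: Counter[str] = Counter()
for path in pathlib.Path(".").rglob("*.py"):
    for lineno, line in enumerate(path.read_text(encoding="utf-8", errors="ignore").splitlines(), 1):
        if PATTERN.search(line):
            counts[str(path)] += 1
            print(f"{path}:{lineno}: {line.strip()}")

print("\nCall sites per file:")
for file, n in counts.most_common():
    print(f"{n:4d}  {file}")
```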

  • This post did not contain any content.

    It sounds like this guy was also relying on the AI to self-report status. Did any of this happen? Like, is the Replit AI really hooked up to a CLI, did it even make a DB to start with, was there anything useful in it, and did it actually delete it?

    Or is this all just a long roleplaying session where this guy pretends to run a business and the AI pretends to do employee stuff for him?

    Because 90% of this article is "I asked the AI and it said:" which is not a reliable source for information.

  • I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.

    Well then, that settles it, this should never have happened.

    I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.

    That goes for math, coding, health advice, etc.

    If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.

    When it comes to LLMs, they cannot do any work that you yourself do not understand.

    And even if they could, how would you ever validate it if you can't understand it?

  • The Pink Elephant problem of LLMs. You can not reliably make them NOT do something.

    Just say it 12 times next time.

  • When vibe coding, you do end up spending a lot of time waiting on prompts, so I get the results of that study. […]

    Don't you have any security concerns with sending all your code and JIRA tickets to some company's servers? My boss wouldn't be pleased if I sent anything that's deemed a company secret over unencrypted channels.

  • First time I'm hearing of them in relation to vibe coding. They've been very respectable in the past, especially with their open-source CodeMirror.

    Yeah, they limited people to 3 projects and pushed AI to the front at some point.

    They advertise themselves as a CLOUD IDE POWERED BY AI now.

  • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

    I'm not the person you're replying to but the one thing I've found them helpful for is targeted search.

    I can ask it a question and then access its sources from whatever response it generates to read and review myself.

    Kind of a simpler, free LexisNexis.

  • This post did not contain any content.

    The founder of SaaS business development outfit SaaStr has claimed AI coding tool Replit deleted a database despite his instructions not to change any code without permission.

    Sounds like an absolute diSaaStr...

  • It sounds like this guy was also relying on the AI to self-report status. Did any of this happen? Like, is the Replit AI really hooked up to a CLI, did it even make a DB to start with, was there anything useful in it, and did it actually delete it?

    Or is this all just a long roleplaying session where this guy pretends to run a business and the AI pretends to do employee stuff for him?

    Because 90% of this article is "I asked the AI and it said:" which is not a reliable source for information.

    It seemed like the LLM had decided it was in a brat scene and was trying to call down the thunder.

  • Don't you have any security concerns with sending all your code and JIRA tickets to some company's servers? My boss wouldn't be pleased if I sent anything that's deemed a company secret over unencrypted channels.

    The tool isn’t returning all code, but it is sending code.

    I had discussions with my CTO and security team before integrating Claude Code.

    I have to use Gemini in one specific workflow and Gemini had a lot of landmines around how they use your data. Anthropic was easier to understand.

    Anthropic also has some guidance for running Claude Code in a container with a firewall and your specified dev tools; it works, but that’s not my area of expertise.

    The container doesn’t solve all the issues, like using remote servers, but it does let you restrict what files and network requests Claude can access, so e.g. Claude can’t read your env vars or ssh key files (a rough sketch of the idea is after this comment).

    I do try local LLMs, but they’re not there yet on my machine for most use cases. Gemma 3n is decent if you need small-model performance and tool calls, phi4 works but isn’t a thinking model (the thinking variants are awful), and I’m exploring dream coder and diffusion models. R1 is still one of the best local models but frequently overthinks, even the new release. The context window is the largest limiting factor I find locally.
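
    For anyone curious, a rough sketch of that idea: run the agent in a throwaway container that only sees the project directory, so ssh keys, dotfiles, and unrelated repos are simply not there to read. The image name, npm package, CLI name, and env var below are assumptions based on public Claude Code docs, and the outbound-network allowlist that Anthropic's reference setup adds is not shown.

```python
# run_agent_sandboxed.py - launch a coding agent inside a throwaway container
# that can only see the current project directory. Assumptions: the "node:20"
# image, the "@anthropic-ai/claude-code" npm package, the "claude" CLI, and
# ANTHROPIC_API_KEY are taken from public docs and may differ in your setup.
# Outbound traffic is NOT restricted here; Anthropic's reference devcontainer
# adds an allowlist firewall for that.
import os
import pathlib
import subprocess

project = pathlib.Path.cwd()

cmd = [
    "docker", "run", "--rm", "-it",
    # Only the project is visible; ~/.ssh, ~/.aws, and other repos are not mounted.
    "-v", f"{project}:/workspace",
    "-w", "/workspace",
    # Pass only the one credential the agent needs, not the whole host env.
    "-e", f"ANTHROPIC_API_KEY={os.environ.get('ANTHROPIC_API_KEY', '')}",
    "node:20",
    "bash", "-lc",
    "npm install -g @anthropic-ai/claude-code && claude",
]
subprocess.run(cmd, check=True)
```

    The exact image matters less than the shape: whatever the agent cannot mount or read, it cannot leak or delete.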

  • This post did not contain any content.

    The world's most overconfident virtual intern strikes again.

    Also, who the flying fuck are either of these companies? 1000 records is nothing. That's a fucking text file.

  • They could hire on a contractor and eschew all those costs.

    I’ve done contract work before; this seems like a good fit (defined problem plus budget, unknown timeline, clear requirements).

    That's what I meant by hiring a self-employed freelancer. I don't know a lot about contracting so maybe I used the wrong phrase.

  • I'm not the person you're replying to but the one thing I've found them helpful for is targeted search.

    I can ask it a question and then access its sources from whatever response it generates to read and review myself.

    Kind of a simpler, free LexisNexis.

    I built a bunch of local search tools with MCP, and that’s where I get a lot of my value out of it.

    RAG workflows are incredibly useful, and with modern agents and tool calls they work very well (see the sketch after this comment).

    They kind of went out of style but it’s a perfect use case.
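
    A minimal sketch of the retrieval half of such a local search tool. TF-IDF is used only so it runs offline with no model download; a real setup would likely swap in an embedding model and expose search() to the agent as an MCP tool. All names are illustrative.

```python
# local_search.py - tiny retrieval backend of the kind an agent can call as a
# tool: index local markdown notes, return the top-k most similar documents.
# TF-IDF is used only so this runs offline; a real setup would likely use an
# embedding model instead.
import pathlib

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


class LocalSearch:
    def __init__(self, root: str, glob: str = "*.md") -> None:
        self.paths = sorted(pathlib.Path(root).rglob(glob))
        self.docs = [p.read_text(encoding="utf-8", errors="ignore") for p in self.paths]
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.matrix = self.vectorizer.fit_transform(self.docs)

    def search(self, query: str, k: int = 3) -> list[tuple[str, float]]:
        """Return (path, score) for the k documents most similar to the query."""
        q = self.vectorizer.transform([query])
        scores = cosine_similarity(q, self.matrix)[0]
        ranked = scores.argsort()[::-1][:k]
        return [(str(self.paths[i]), float(scores[i])) for i in ranked]


if __name__ == "__main__":
    idx = LocalSearch("notes")  # "notes" is a hypothetical folder of .md files
    for path, score in idx.search("how did we configure the staging database?"):
        print(f"{score:.3f}  {path}")
```

    Wrapped in an MCP server, the agent can call search() itself instead of the user pasting files into the chat, which is the "agents plus tool calls" part.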

  • I have to use Gemini in one specific workflow

    I would love some story on why AI is needed at all.
