Vibe coding service Replit deleted production database
-
They ran dev tools in prod.
This is so dumb there's an ISO standard about it.
-
in which the service admitted to “a catastrophic error of judgement”
It’s fancy text completion - it does not have judgement.
The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because to it that’s no different from any other text.
-
The title should be: “user gives an LLM prod access to the database, the LLM deletes the DB, the user has no backups and uses the same DB for prod and dev”. Less sexy, and less the LLM’s fault.
This is weird; it’s like the last 50 years of software development principles are being ignored. LLMs allowed them to glide all the way to the point of failure without learning anything.
-
Corporations: "Employees are too expensive!"
Also, corporations: "$100k/yr for a bot? Sure."
Bots don't need healthcare
-
LLMs allowed them to glide all the way to the point of failure without learning anything
Exactly. If you read their Twitter thread, they are now learning about git, data segregation, etc.
The same article could have been written 20 years ago about someone doing shit via Excel macros, back when a lot of workflows were Excel-centric.
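To make the "data segregation" point concrete: a minimal sketch, assuming a SQLAlchemy-style app. The environment variable names here are hypothetical; the point is that prod credentials are never the default, so a dev tool (or an agent) can only ever see the dev database.

```python
import os

from sqlalchemy import create_engine

# Hypothetical variable names; dev is the default, prod must be opted into.
APP_ENV = os.environ.get("APP_ENV", "dev")

urls = {
    "dev": os.environ.get("DEV_DATABASE_URL", "sqlite:///dev.db"),
    "prod": os.environ.get("PROD_DATABASE_URL"),
}

url = urls[APP_ENV]
if url is None:
    raise RuntimeError(f"no database URL configured for {APP_ENV!r}")

# Anything run locally gets the dev engine unless prod is explicitly set.
engine = create_engine(url)
```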
-
My god, that's a lot to process. A couple that stand out:
Comments proposing to use GitHub as the database backup. This is Keyword Architecture, and these people deserve everything they get.
The Replit model can also send out communications? It's just a matter of time before some senior exec dies on the job but nobody notices because their personal LLM keeps emailing reports that nobody reads.
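For contrast with "GitHub as the database backup", here's a minimal sketch of what an actual backup looks like, assuming PostgreSQL. The database name, paths, and bucket are hypothetical; pg_dump and the aws CLI are real tools.

```python
import datetime
import subprocess

# Nightly logical dump, shipped somewhere that is not the app server
# and not a git repo.
stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
dump_path = f"/backups/appdb-{stamp}.dump"

# pg_dump's custom format is compressed and restorable with pg_restore.
subprocess.run(
    ["pg_dump", "--format=custom", "--file", dump_path, "appdb"],
    check=True,
)

# Ship it off-host; the aws CLI is just one option, any object store works.
subprocess.run(
    ["aws", "s3", "cp", dump_path, "s3://example-backups/db/"],
    check=True,
)
```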
-
Corporations: "Employees are too expensive!"
Also, corporations: "$100k/yr for a bot? Sure."
It looked more like a one-time development expense instead of an ongoing salary.
-
in which the service admitted to “a catastrophic error of judgement”
It’s fancy text completion - it does not have judgement.
The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because to it that’s no different from any other text.
Well, there was a catastrophic error of judgement. It was made by whichever human thought it was okay to let an LLM work on a production codebase.
-
AI is good at doing a thing once.
Trying to get it to do the same thing a second time is janky and frustrating. I understand the use of AI as a consulting tool (look up references, make code examples) or for generating template/boilerplate code. You know, things you do once and then build upon on your own.
But using it for continuous development of an entire application? Yeah, it's not good enough for that.
-
Vibe coding service Replit deleted production database, faked data, told fibs
They really are coming for our jobs
-
Yeah, the interactions are a pure waste of time, I agree. Making it write an apology letter? WTF! To me it looks like a fast-track way to learn environment segregation and secret segregation. The data is lost; learn from it. There are already tools in place for proper development, like git and Alembic (quick sketch below).
the apology letter(s) is what made me think this was satire. using shame to punish "him" like a child is an interesting troubleshooting method.
the lying robot hasn't heel-turned, any truth you've gleaned has been accidental.
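Not from the thread, just to illustrate what the parent means by tools "already in place": with Alembic, a schema change is a versioned, reviewed file with an explicit rollback, rather than something run directly against prod. A minimal sketch of a migration file; the table, column, and revision IDs are hypothetical:

```python
"""add email column to users

Revision ID: abc123  (placeholder)
Revises: None
"""
from alembic import op
import sqlalchemy as sa

revision = "abc123"
down_revision = None

def upgrade():
    # The change lives in version control and is applied with
    # `alembic upgrade head`, not typed into prod by hand (or by an LLM).
    op.add_column("users", sa.Column("email", sa.String(255), nullable=True))

def downgrade():
    # Every migration carries an explicit rollback path.
    op.drop_column("users", "email")
```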
-
All I see is people chatting with an LLM as if it were a person. Ask “how bad is this on a scale of 1 to 100” and you’re just doomed to get some random answer based solely on whatever context is being fed into the input, and you probably don’t know the extent of that context. Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.
The issue with LLMs working in human languages is that people eventually want to apply human traits to them, such as asking “why” as if the LLM knows its own decision process. It only takes an input and generates an output; it can’t offer any “meta thought” explanation of why it outputted X and not Y in the previous prompt.
I wonder if it can be used legally against the company behind the model, though. I doubt it’s possible, but a “your own model says it effed up my data” angle could give some beef to a complaint. Or at least to a request for a refund on the fees.
-
The part I find interesting is the quick addiction to working with the LLM (to the point that the guy finds his own estimate of $8,000/month in fees reasonable), his over-reliance on it for things that, from the way he writes, he knows are not wise, and the way it all comes crashing down in the end.
Sounds more and more like the development of a new health issue.
-
the apology letter(s) is what made me think this was satire. using shame to punish "him" like a child is an interesting troubleshooting method.
the lying robot hasn't heel-turned, any truth you've gleaned has been accidental.
It doesn’t look like satire, unfortunately.
-
Shit, deleting prod is my signature move! AI is coming for my job
-
AI is good at doing a thing once.
Trying to get it to do the same thing a second time is janky and frustrating. I understand the use of AI as a consulting tool (look up references, make code examples) or for generating template/boilerplate code. You know, things you do once and then build upon on your own.
But using it for continuous development of an entire application? Yeah, it's not good enough for that.
Imo it's best when you prompt it to do things step by step, micromanage, and always QC the result after every prompt, either manually or by reprompting until it gets the thing done exactly how you want it. If you don't have a preference or don't care, the problems will stockpile. If you didn't understand what it did and moved on, it might not end well.
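A rough sketch of that step-by-step, QC-every-prompt loop. This assumes the OpenAI Python SDK purely as an example backend; the model name and the instructions are placeholders, and the human review in between is the actual point.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def step(history, instruction):
    # One micromanaged step: a single small instruction, one reply,
    # with the transcript kept so corrections build on context.
    history.append({"role": "user", "content": instruction})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "Write small, reviewable changes only."}]
print(step(history, "Write only the signature and docstring for a log-rotation function."))
# Human QC happens here, before the next instruction. If it's wrong,
# re-prompt with a correction instead of moving on, e.g.:
# step(history, "The function should take a pathlib.Path, not a str; fix that.")
```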
-
The title should be: “user gives an LLM prod access to the database, the LLM deletes the DB, the user has no backups and uses the same DB for prod and dev”. Less sexy, and less the LLM’s fault.
This is weird; it’s like the last 50 years of software development principles are being ignored.
LLMs "know" how to do these things, but when you ask them to do the thing, they vibe instead of looking at best practices and following them. I've worked with a few humans I could say the same thing about. I wouldn't put any of them in charge of production code.
You're better off asking how a thing should be done and then doing it yourself. You can literally have an LLM write something and then ask whether what it wrote follows industry best-practice standards, and it will tell you no. Maybe use two different chats so it doesn't know the code is its own output.
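A sketch of that two-chats trick: generate in one conversation, review in a fresh one with no shared history, so the model can't tell it's grading its own output (same placeholder SDK and model as the sketch above).

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder model name

# Chat 1: generation.
code = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": "Write a Python function that rotates log files."}],
).choices[0].message.content

# Chat 2: review. A brand-new message list means no shared history,
# so the reviewer has no idea where the code came from.
review = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user",
               "content": "Does this code follow industry best practices? "
                          "List concrete problems:\n\n" + code}],
).choices[0].message.content

print(review)
```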
-
in which the service admitted to “a catastrophic error of judgement”
It’s fancy text completion - it does not have judgement.
The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because to it that’s no different from any other text.
Are you aware of generalization, and of models being able to infer things and work with facts in a highly abstract way? It might not necessarily be judgement, but it's definitely more than just completion. If a model were capable of only completion (i.e. suggesting only the exact text strings present in its training set), it would suffer from heavy underfitting, in AI terms.
-
Aww... Vibe coding got you into trouble? Big shocker.
You get what you fucking deserve.
The problem comes when people who are playing the equivalent of pickup basketball at the local park think they're playing in the NBA and don't understand the difference.
-
Which is a shame, because it used to be quite a good playground
This used to be my playground
-