Vibe coding service Replit deleted production database
-
The apology letter(s) are what made me think this was satire. Using shame to punish "him" like a child is an interesting troubleshooting method.
The lying robot hasn't heel-turned; any truth you've gleaned has been accidental.
It doesn’t look like satire, unfortunately
-
This post did not contain any content.
Shit, deleting prod is my signature move! AI is coming for my job
-
AI is good at doing a thing once.
Trying to get it to do the same thing the second time is janky and frustrating.
I understand the use of AI as a consulting tool (look at references, make code examples) or for generating template/boilerplate code. You know, things you do once and then develop further on your own.
But using it for continuous development of an entire application? Yeah, it's not good enough for that.
Imo it's best when you prompt it to do things step by step, micromanage, and always QC the result after every prompt. Either manually, or by reprompting until it gets things done exactly how you want. If you don't have a preference or don't care, the problems will stockpile. If you didn't understand what it did and moved on, it might not end well.
-
Title should be “user gives prod database access to an LLM, which deleted the db; user did not have any backup and used the same db for prod and dev”. Less sexy and less the LLM's fault.
This is weird; it's like the last 50 years of software development principles are being ignored.
LLMs "know" how to do these things, but when you ask them to do the thing, they vibe instead of looking at best practices and following them. I've worked with a few humans I could say the same thing about. I wouldn't put any of them in charge of production code.
You're better off asking how a thing should be done and then doing it yourself. You can literally have an LLM write something and then ask whether what it wrote follows industry best practices, and it will tell you no. Maybe use two different chats so it doesn't know the code is its own output.
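That two-chat review is easy to script, too. A minimal sketch, assuming the official `openai` Python client; the model name, prompts, and helper names here are placeholders, not anything Replit actually runs:

```python
# Sketch of the "two separate chats" trick: generate in one conversation,
# review in a fresh one, so the reviewer doesn't know the code is its own.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate(task: str) -> str:
    """Chat #1: ask the model to write the code."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": f"Write code for: {task}"}],
    )
    return resp.choices[0].message.content

def review(code: str) -> str:
    """Chat #2: a brand-new conversation with no shared history."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": "Does the following code follow industry best "
                       "practices? List every violation.\n\n" + code,
        }],
    )
    return resp.choices[0].message.content

draft = generate("a script that migrates a production database")
print(review(draft))  # the reviewer will happily tear the draft apart
```

Since the review call starts with an empty context, nothing tells the model the code is its own output.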
-
in which the service admitted to “a catastrophic error of judgement”
It’s fancy text completion - it does not have judgement.
The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because that is no different from any other text.
Are you aware of generalization, and of its ability to infer things and work with facts in a highly abstract way? It might not necessarily be judgement, but it's definitely more than just completion. If a model is capable of only completion (i.e. suggesting only the exact text strings present in its training set), it means it suffers from heavy underfitting, in AI terms.
-
Aww... Vibe coding got you into trouble? Big shocker.
You get what you fucking deserve.
The problem comes when people who are playing the equivalent of pickup basketball at the local park think they are playing in the NBA and don't understand the difference.
-
Which is a shame, because it used to be quite a good playground
This used to be my playground
-
AI is good at doing a thing once.
Trying to get it to do the same thing the second time is janky and frustrating.
I understand the use of AI as a consulting tool (look at references, make code examples) or for generating template/boilerplate code. You know, things you do once and then develop further on your own.
But using it for continuous development of an entire application? Yeah, it's not good enough for that.
If it had the same seed, it would do the same thing. But you can’t control that with most of them.
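Some APIs do expose it, at least as best-effort. A minimal sketch, assuming the `openai` Python client, whose chat endpoint accepts a `seed` parameter; the model name is a placeholder, and even with a pinned seed the provider only promises mostly-deterministic output:

```python
# Sketch: pinning temperature and seed for (best-effort) repeatable output.
# Determinism still isn't guaranteed; the backend can change underneath you,
# which is what `system_fingerprint` is for.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,   # remove sampling randomness
        seed=42,         # same seed -> same output, in principle
    )
    print(resp.system_fingerprint)  # changes when the backend does
    return resp.choices[0].message.content

a = ask("Write a function that parses ISO 8601 dates.")
b = ask("Write a function that parses ISO 8601 dates.")
print(a == b)  # often True, never contractually guaranteed
```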
-
This post did not contain any content.
And nothing of value was lost.
-
Are you aware of generalization, and of its ability to infer things and work with facts in a highly abstract way? It might not necessarily be judgement, but it's definitely more than just completion. If a model is capable of only completion (i.e. suggesting only the exact text strings present in its training set), it means it suffers from heavy underfitting, in AI terms.
Completion is not the same as only returning the exact strings in its training set.
LLMs don't really seem to display true inference or abstract thought, even when it seems that way. A recent Apple paper demonstrated this quite clearly.
-
Vibe coding service Replit deleted production database, faked data, told fibs
They really are coming for our jobs
I'm okay with it deleting production databases, even faking data, but telling fibs is something only humans should be able to do.
-
Completion is not the same as only returning the exact strings in its training set.
LLMs don't really seem to display true inference or abstract thought, even when it seems that way. A recent Apple paper demonstrated this quite clearly.
Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time. It doesn't really matter what we consider true abstract thought or true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.
-
This post did not contain any content.
If an LLM can delete your production database, it should
-
There are a lot of other expenses with an employee (like payroll taxes, benefits, retirement plans, a health plan if they're in the USA, etc.), but you could find a self-employed freelancer, for example.
Or just get an employee anyway, because you'll still likely have a positive ROI. A good developer will take your abstract list of vague requirements and produce something useful and maintainable.
These comparisons assume equal capability, which I find troubling.
Like, a person who doesn't understand singing and isn't able to learn it cannot perform adequately in a musical. It doesn't matter if they are cheaper.
-
This post did not contain any content.
Replit was pretty useful before vibe coding. How the mighty have fallen.
-
Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time. It doesn't really matter what we consider true abstract thought or true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.
The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time
You mean like a calculator does?
-
The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time
You mean like a calculator does?
Yeah, this is a correct analogy, just with much more complex problems than a calculator handles. How similar it is or isn't to the human way of thinking is completely irrelevant. And how much exact human-style thinking is necessary for any kind of problem solving or work is not something we can really calculate. Assuming that scientific breakthroughs, engineering innovations, medical work, complex math problems, programming, etc. necessarily need human thinking, or benefit from it as opposed to a super advanced statistical meta-patterning calculator, is wishful thinking. It is not based on any real knowledge we have.
If you think it is wrong to give it our problems to solve, to give it our work, then that's a very understandable argument, but you should say exactly that. Instead this AI-hate hivemind tries to downplay it with dismissive, braindead, generic phrases like "NoPe ItS nOt ReAlLy UnDeRsTaNdInG aNyThInG". Okay, who tf asked? It solves the problem. People keep using it and become overpowered because of it. What is the benefit of trying to downplay its power like that? You're not really fighting it this way, if fighting it is what you wanted.
-
This post did not contain any content.
I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.
Well then, that settles it, this should never have happened.
I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.
That goes for math, coding, health advice, etc.
If you don’t understand it, then you don’t know what they’re doing wrong. They’re helpful tools, but only in this context.
-
There are a lot of other expenses with an employee (like payroll taxes, benefits, retirement plans, a health plan if they're in the USA, etc.), but you could find a self-employed freelancer, for example.
Or just get an employee anyway, because you'll still likely have a positive ROI. A good developer will take your abstract list of vague requirements and produce something useful and maintainable.
They could hire a contractor and eschew all those costs.
I’ve done contract work before; this seems like a good fit (defined problem plus budget, unknown timeline, clear requirements).
-
This post did not contain any content.
Not mad about an estimated usage bill of $8k per month.
Just hire a developer