Vibe coding service Replit deleted production database
-
Vibe coding service Replit deleted production database, faked data, told fibs
They really are coming for our jobs
I'm okay with it deleting production databases, even faking data, but telling fibs is something only humans should be able to do.
-
Completion is not the same as only returning the exact strings in its training set.
LLMs don't really seem to display true inference or abstract thought, even when it seems that way. A recent Apple paper demonstrated this quite clearly.
Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time. It doesn't really matter what we consider true abstract thought or true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.
-
If an LLM can delete your production database, it should
-
There are a lot of other expenses with an employee (payroll taxes, benefits, retirement plans, a health plan if they're in the USA, etc.), but you could find a self-employed freelancer, for example.
Or just get an employee anyways because you'll still likely have a positive ROI. A good developer will take your abstract list of vague requirements and produce something useful and maintainable.
These comparisons assume equal capability, which I find troubling.
Like, a person who doesn't understand singing and isn't able to learn it cannot perform adequately in a musical. It doesn't matter that they're cheaper.
-
Replit was pretty useful before vibe coding. How the mighty have fallen.
-
Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time. It doesn't really matter what we consider true abstract thought or true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.
The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time
You mean like a calculator does?
-
The point is simple: it's able to solve complex problems and do very impressive things that even humans struggle with, in a very short time
You mean like a calculator does?
Yeah, this is the correct analogy, but for much more complex problems than a calculator handles. How similar it is or isn't to humans' way of thinking is completely irrelevant. And how much exact human-type thinking is necessary for any kind of problem solving or work is not something we can really calculate. Assuming that scientific breakthroughs, engineering innovations, medical work, complex math problems, programming, etc., necessarily need human thinking, or benefit from it as opposed to a super-advanced statistical meta-patterning calculator, is wishful thinking. It is not based on any real knowledge we have. If you think it is wrong to give it our problems to solve, to give it our work, then that's a very understandable argument, but you should say exactly that. Instead this AI-hate hivemind tries to downplay it using dismissive, braindead generic phrases like "NoPe ItS nOt ReAlLy UnDeRsTaNdInG aNyThInG". Okay, who tf asked? It solves the problem. People keep using it and become overpowered because of it. What is the benefit of trying to downplay its power like that? You're not really fighting it this way, if fighting it is what you wanted.
-
I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.
Well then, that settles it, this should never have happened.
I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.
That goes for math, coding, health advice, etc.
If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.
-
There are a lot of other expenses with an employee (payroll taxes, benefits, retirement plans, a health plan if they're in the USA, etc.), but you could find a self-employed freelancer, for example.
Or just get an employee anyways because you'll still likely have a positive ROI. A good developer will take your abstract list of vague requirements and produce something useful and maintainable.
They could hire on a contractor and eschew all those costs.
I’ve done contract work before, this seems a good fit (defined problem plus budget, unknown timeline, clear requirements)
-
Not mad about an estimated usage bill of $8k per month.
Just hire a developer
-
Replit was pretty useful before vibe coding. How the mighty have fallen.
First time I'm hearing them be related to vibe coding. They've been very respectable in the past, especially with their open-source CodeMirror.
-
I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.
Well then, that settles it, this should never have happened.
I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.
That goes for math, coding, health advice, etc.
If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.
I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.
This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.
-
I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.
This baffles me. How can anyone see AI function in the wild and not conclude 1) it has no conscience, 2) it's free to do whatever it's empowered to do if it wants and 3) at some level its behavior is pseudorandom and/or probabilistic? We're figuratively rolling dice with this stuff.
It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.
I don’t think most people care about the proverbial man behind the curtain: it talks like a human so it must be smart like a human.
-
It’s incredible that it works, it’s incredible what just encoding language can do, but it is not a rational thinking system.
I don’t think most people care about the proverbial man behind the curtain: it talks like a human so it must be smart like a human.
it talks like a human so it must be smart like a human.
Yikes. Have those people... talked to other people before?
-
He had one DB for both prod and dev, no backup. The LLM went into override mode and deleted the dev DB as it was developing, but oops, that was the prod DB. And oops, no backup.
Yeah, it's the LLM's and Replit's fault. /s
There was a backup, and it was restored. However, the LLM lied and said there wasn't at first. You can laugh all you want at it. I did. But maybe read the article so you aren't also lying.
-
it talks like a human so it must be smart like a human.
Yikes. Have those people... talked to other people before?
Smart is a relative term lol.
A stupid human is still smart when compared to a jellyfish. That said, anybody who comes away from interactions with LLMs and thinks they're smart is only slightly more intelligent than a jellyfish.
-
Coming up with even more vague terms to try to downplay it is missing the point. The point is simple: it's able to solve complex problems and do very impressive things that even human struggle to, in very short time. It doesn't really matter what we consider true abstract thought of true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it's able to solve more complex problems and perform more complex pattern matching.
Well the thing is, LLMs don't seem to really "solve" complex problems. They remember solutions they've seen before.
The example I saw was asking an LLM to solve "Towers of Hanoi" with 100 disks. This is a common recursive programming problem that takes a human quite a while to write the answer to. The LLM manages this easily. But when asked to solve the same problem with, say, 79 disks, or 41 disks, or some other oddball number, the LLM fails to solve it, despite the problem being simpler(!).
It can do pattern matching and provide solutions, but it's not able to come up with truly new solutions. It does not "think" in that way. LLMs are amazing data storage formats, but they're not truly 'intelligent' in the way most people think.
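For context, the recursive solution the comment refers to is short enough to sketch in a few lines of Python (peg names here are arbitrary labels, and this is a minimal illustration, not the prompt used in the test described above):

```python
def hanoi(n, source, target, spare):
    """Return the list of (from_peg, to_peg) moves that transfers
    n disks from source to target, using spare as scratch space."""
    if n == 0:
        return []
    # Move the top n-1 disks aside, move the largest disk directly,
    # then move the n-1 disks back on top of it.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

print(len(hanoi(3, "A", "C", "B")))  # 7 moves, i.e. 2**3 - 1
```

The move count grows as 2**n - 1 regardless of which pegs you name, which is why a model failing on 79 disks after handling 100 looks like recall rather than reasoning.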
-
in which the service admitted to “a catastrophic error of judgement”
It’s fancy text completion - it does not have judgement.
The way he talks about it shows he still doesn’t understand that. It doesn’t matter that you tell it something in ALL CAPS, because that is no different from any other text.
judgement
Yeah, it admitted to an error in judgement because the prompter clearly declared it so.
Generally, LLMs will make whatever statement about what happened that you want them to make. If you told it that it went fantastically, it would agree. If you told it that it went terribly, it would parrot that sentiment back.
Which is what seems to make it so dangerous for some people's mental health: a text generator that wants to agree with whatever you are saying, but does so without verbatim copying, so it gives the illusion of another thought process agreeing with them. Meanwhile, concurrent with your chat, another person starting from the exact same model is getting a dialog that violently disagrees with the first person. It's an echo chamber.
-
I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.
Well then, that settles it, this should never have happened.
I don’t think putting complex technical info in front of non-technical people like this is a good idea. When it comes to LLMs, they cannot do any work that you yourself do not understand.
That goes for math, coding, health advice, etc.
If you don’t understand then you don’t know what they’re doing wrong. They’re helpful tools but only in this context.
What are they helpful tools for then? A study showed that they make experienced developers 19% slower.
-
There was a backup, and it was restored. However, the LLM lied and said there wasn't at first. You can laugh all you want at it. I did. But maybe read the article so you aren't also lying.
Not according to the Twitter thread. I went through the thread; it’s a roller coaster of amateurism.