
Vibe coding service Replit deleted production database

Technology
  • They could hire on a contractor and eschew all those costs.

    I’ve done contract work before; this seems like a good fit (defined problem plus budget, unknown timeline, clear requirements).

    That's what I meant by hiring a self-employed freelancer. I don't know a lot about contracting so maybe I used the wrong phrase.

  • I'm not the person you're replying to but the one thing I've found them helpful for is targeted search.

    I can ask it a question and then access its sources from whatever response it generates to read and review myself.

    Kind of a simpler, free LexisNexis.

    I built a bunch of local search tools with MCP, and that’s where I get a lot of my value out of it.

    RAG workflows are incredibly useful, and with modern agents and tool calls they work very well.

    They’ve kind of gone out of style, but this is a perfect use case for them.
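
    A minimal sketch of that pattern, assuming a hypothetical call_llm stand-in for whatever model API you use; the keyword-overlap retrieval here is a toy, where a real setup would use embeddings:

        # Sketch of a RAG-style lookup: retrieve relevant local documents
        # first, then hand only those to the model as context.

        def call_llm(prompt: str) -> str:
            # Hypothetical stand-in for a real model API call.
            raise NotImplementedError("plug in your model API here")

        def retrieve(query: str, docs: dict[str, str], k: int = 3) -> list[str]:
            # Score documents by naive keyword overlap with the query.
            terms = set(query.lower().split())
            ranked = sorted(
                docs.items(),
                key=lambda item: len(terms & set(item[1].lower().split())),
                reverse=True,
            )
            return [f"[{name}]\n{text}" for name, text in ranked[:k]]

        def answer(query: str, docs: dict[str, str]) -> str:
            # Stuff the top-k sources into the prompt so the model can cite
            # them, and so you can go read them yourself afterwards.
            context = "\n\n".join(retrieve(query, docs))
            prompt = (
                "Answer using only the sources below, citing them by name.\n\n"
                f"{context}\n\nQuestion: {query}"
            )
            return call_llm(prompt)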

  • The tool isn’t returning all code, but it is sending code.

    I had discussions with my CTO and security team before integrating Claude code.

    I have to use Gemini in one specific workflow, and Gemini had a lot of landmines around how they use your data. Anthropic was easier to understand.

    Anthropic also has some guidance for running Claude Code in a container with a firewall and your specified dev tools. It works, but that’s not my area of expertise.

    The container doesn’t solve every issue (like using remote servers), but it does let you restrict which files and network requests Claude can access (so e.g. Claude can’t read your env vars or SSH key files).

    I do try local LLMs, but they’re not there yet on my machine for most use cases. Gemma 3n is decent if you need small-model performance and tool calls; phi4 works but isn’t a thinking model (the thinking variants are awful); and I’m exploring dream coder and diffusion models. R1 is still one of the best local models but frequently overthinks, even the new release. Locally, I find the context window to be the biggest limiting factor.

    I have to use Gemini in one specific workflow

    I would love to hear the story of why AI is needed at all.

    All I see is people chatting with an LLM as if it were a person. Ask “how bad is this on a scale of 1 to 100” and you’re just doomed to get some random answer based solely on whatever context is being fed into the input, the full extent of which you probably don’t know.

    Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.

    The issue with LLMs working in human language is that people eventually want to apply human traits to them, such as asking “why” as if the LLM knew its own decision process. It only takes an input and generates an output; it can’t offer any “meta thought” explanation of why it outputted X and not Y in the previous prompt.

    How bad is this on a scale of sad emoji to eggplant emoji.

    Children are replacing us, it's terrifying.

  • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

    ok so, i have large reservations about how LLMs are used. but when used correctly they can be helpful. but where and how?

    if you were to use it as a tutor, the same way you would ask a friend what a segment of code does, it will break down the code and tell you. it will get as nitty-gritty and elementary-school-level as you wish, without judgement, and in whatever manner you prefer. it will recommend best practices, and will tell you why your code may not work, with the understanding that it does not have knowledge of the project you are working on (it’s not going to know the name of the function you are trying to load, but it will recommend checking for that while troubleshooting).

    it can rtfm and give you the parts you need for anything with available documentation, and it will link to the docs so you can verify them, which you should do often, just like you were taught to do with wikipedia articles.

    if you ask it for code, prepare to go through each line like a worksheet from high school and point out all the problems. while that is good exercise for a practical case, namely the task you are on, it would be far better to write it yourself, because you should know the particulars and scope.

    also, it will format your code and provide informational comments if you can’t be bothered, though they will be generic.

    again, treat it correctly for its scope, not what it’s sold as by charlatans.

  • I have to use Gemini in one specific workflow

    I would love to hear the story of why AI is needed at all.

    Batch processing that turns unstructured, free-form text data into structured outputs.

    As a crappy example, imagine you wanted to download metadata for your albums, but they’re all labelled “Various Artists”. You can use an LLM call to read each album description and fix the track artists; now you can properly organize your collection.

    I’m using the same idea in a different domain, with a complex set of inputs.

    It can be much more cost effective than manually spending days tagging data and writing custom importers.

    You can definitely go lighter than LLMs: you can use gensim to do category matching, or sentence transformers and nearest neighbours (this is basically what Semantle does), but an LLM performed best on more complex document input.
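
    As a rough sketch of that lighter-weight route, matching free-form text to known categories with sentence embeddings plus nearest-neighbour search; the model name and category list here are illustrative, not from the comment above:

        # Match free-form text to known categories without an LLM call.
        from sentence_transformers import SentenceTransformer
        from sklearn.neighbors import NearestNeighbors

        categories = ["rock", "jazz", "classical", "electronic"]

        model = SentenceTransformer("all-MiniLM-L6-v2")
        index = NearestNeighbors(n_neighbors=1, metric="cosine")
        index.fit(model.encode(categories))

        def match_category(description: str) -> str:
            # Return the known category closest to the description.
            _, idx = index.kneighbors(model.encode([description]))
            return categories[idx[0][0]]

        print(match_category("late-night saxophone trio, recorded live in 1959"))
        # most likely "jazz"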

    It seemed like the LLM had decided it was in a brat scene and was trying to call down the thunder.

    Oops I dweted evewyfing 🥺

  • This post did not contain any content.

    Replit sucks

    This only proves that some of them can't solve all complex problems. I'm only claiming that some of them can solve some complex problems: not only by remembering exact solutions, but by remembering the steps and actions used in building those solutions, generalizing, and transferring them to new problems. Anyone who tries using one for programming will discover this very fast.

    PS: Some of them have already been used to solve problems and find patterns in data that humans weren't able to get at any other way before (particle research at CERN, bioinformatics, etc.).

    You're referring to more generic machine learning, not LLMs. These are vastly different technologies.

    And I have used them for programming, I know their limitations. They don't really transfer solutions to new problems, not on their own anyway. It usually requires pretty specific prompting. They can at best apply solutions to problems, but even then it's not a truly generalised thing, even if it seems to work for many cases.

    That's the trap you're falling into as well: LLMs look like they're doing all this stuff because they're trained on data produced by people who actually do it. But they can't think of something truly novel. LLMs are mathematically unable to truly generalize; if they could, it would prove P=NP (there was a paper from a researcher in, IIRC, Nijmegen that proved this). She also proved they won't scale, and lo and behold, LLM performance is plateauing hard (except on very synthetic, artificial benchmarks designed to make LLMs look good).

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    Even after he used "ALL CAPS"?!? Impossible!

    Batch processing that turns unstructured, free-form text data into structured outputs.

    As a crappy example, imagine you wanted to download metadata for your albums, but they’re all labelled “Various Artists”. You can use an LLM call to read each album description and fix the track artists; now you can properly organize your collection.

    I’m using the same idea in a different domain, with a complex set of inputs.

    It can be much more cost effective than manually spending days tagging data and writing custom importers.

    You can definitely go lighter than LLMs: you can use gensim to do category matching, or sentence transformers and nearest neighbours (this is basically what Semantle does), but an LLM performed best on more complex document input.

    That's pretty much what Google says they use AI for: structuring.

    Thanks for your insight.

  • You're referring to more generic machine learning, not LLMs. These are vastly different technologies.

    And I have used them for programming, I know their limitations. They don't really transfer solutions to new problems, not on their own anyway. It usually requires pretty specific prompting. They can at best apply solutions to problems, but even then it's not a truly generalised thing, even if it seems to work for many cases.

    That's the trap you're falling into as well: LLMs look like they're doing all this stuff because they're trained on data produced by people who actually do it. But they can't think of something truly novel. LLMs are mathematically unable to truly generalize; if they could, it would prove P=NP (there was a paper from a researcher in, IIRC, Nijmegen that proved this). She also proved they won't scale, and lo and behold, LLM performance is plateauing hard (except on very synthetic, artificial benchmarks designed to make LLMs look good).

    They don’t really transfer solutions to new problems

    Let's say there is a binary format some old game uses (Doom), and some of its lumps can store indexed images, where each pixel is an index into a colour palette stored in another lump. There's also a programming language called Rust, a little-known library that can parse binary data of that format, and a Rust GUI library that not many people have used either, whose only reference is a file with the names and type signatures of its functions.

    Would you consider it an "ability to transfer solutions to new problems" that it was able to extract the image data from that binary format using the library, extract the palette data, convert the indexed image into regular RGBA image data using the extracted palette, and then render that as a window background using that GUI library? There's no similar Rust code in the wild for any of those scenarios. It was able to do most of this from a few short prompts, maybe even from the first one. There were a few small issues along the way that required reprompting and figuring things out together with it.

    Stuff like this can take half an hour with AI, while doing the whole thing manually could easily take multiple days just to figure out the APIs of the libraries involved and the intricacies of recoding an indexed image to RGBA. For me this is overpowered enough right now, and it's likely to improve even more in the future.
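
    For what it's worth, the indexed-to-RGBA recoding step is simple to sketch on its own. A minimal Python version (the commenter's project was in Rust, and lump parsing and rendering are out of scope here), assuming a 256-colour palette of RGB triples like Doom's PLAYPAL lump:

        # Recoding a palette-indexed image to RGBA: each source pixel is
        # an index into a 256-colour palette of RGB triples (768 bytes),
        # and the output is 4 bytes per pixel.

        def indexed_to_rgba(pixels: bytes, palette: bytes) -> bytes:
            assert len(palette) >= 256 * 3, "expected 256 RGB triples"
            out = bytearray(len(pixels) * 4)
            for i, index in enumerate(pixels):
                r, g, b = palette[index * 3 : index * 3 + 3]
                out[i * 4 : i * 4 + 4] = bytes((r, g, b, 255))  # opaque alpha
            return bytes(out)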

    “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    That is also the premise of one of the stories in Asimov's I, Robot: the human operator did not give the command with enough emphasis, so the robot went and did something incredibly stupid.

    Those stories did not age well... Or now I guess they did?

  • This post did not contain any content.

    So it's the LLM's fault for violating Best Practices, SOP, and Opsec that the rest of us learned about in Year One?

    Someone needs to be shown the door and ridiculed into therapy.

  • This post did not contain any content.

    His mood shifted the next day when he found Replit “was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test.”

    yeah that's what it does

  • They don’t really transfer solutions to new problems

    Let's say there is a binary format some old game uses (Doom), and some of its lumps can store indexed images, where each pixel is an index into a colour palette stored in another lump. There's also a programming language called Rust, a little-known library that can parse binary data of that format, and a Rust GUI library that not many people have used either, whose only reference is a file with the names and type signatures of its functions.

    Would you consider it an "ability to transfer solutions to new problems" that it was able to extract the image data from that binary format using the library, extract the palette data, convert the indexed image into regular RGBA image data using the extracted palette, and then render that as a window background using that GUI library? There's no similar Rust code in the wild for any of those scenarios. It was able to do most of this from a few short prompts, maybe even from the first one. There were a few small issues along the way that required reprompting and figuring things out together with it.

    Stuff like this can take half an hour with AI, while doing the whole thing manually could easily take multiple days just to figure out the APIs of the libraries involved and the intricacies of recoding an indexed image to RGBA. For me this is overpowered enough right now, and it's likely to improve even more in the future.

    That's applying existing solutions to a different programming language or domain, but ultimately every single technique used already exists. It only applied what it knew; it did not come up with something new. The problem as stated is not really "new" either: image extraction, conversion, and rendering aren't exactly new problems.

    I'm not disputing that LLMs can speed up some work; I know they occasionally do so for me as well. But what you have to understand is that the LLM only remembered similar problems and their solutions; it did not at any point invent something truly new. I understand the distinction is difficult to make.

  • This post did not contain any content.

    Headline should say, "Incompetent project managers fuck up by not controlling production database access. Oh well."

    That's applying existing solutions to a different programming language or domain, but ultimately every single technique used already exists. It only applied what it knew; it did not come up with something new. The problem as stated is not really "new" either: image extraction, conversion, and rendering aren't exactly new problems.

    I'm not disputing that LLMs can speed up some work; I know they occasionally do so for me as well. But what you have to understand is that the LLM only remembered similar problems and their solutions; it did not at any point invent something truly new. I understand the distinction is difficult to make.

    I understand what you have in mind; I had similar intuitions about AI in the early 2000s.
    What exactly counts as "truly new" is an interesting topic, of course, but it's a separate one.
    Nowadays I try to look at things more empirically, without projecting my internal intuitions onto everything.
    In practice it does generalize knowledge, use many forms of abstract reasoning, and transfer knowledge across different domains.
    And it can do coding well beyond the complexity of what an average software developer does in everyday work.

  • This post did not contain any content.

    Replit is a vibe coding service now? I swear it used to just be a place to write code in projects.

  • Oops I dweted evewyfing 🥺

    I knew it would make you mad but I did it anyway.

    I don't think you have the guts to do anything about it either, vibe coder.
