
Vibe coding service Replit deleted production database

Technology
  • They could hire a contractor and eschew all those costs.

    I’ve done contract work before; this seems like a good fit (defined problem plus budget, unknown timeline, clear requirements).

    That's what I meant by hiring a self-employed freelancer. I don't know a lot about contracting so maybe I used the wrong phrase.

  • I'm not the person you're replying to, but the one thing I've found them helpful for is targeted search.

    I can ask it a question and then access its sources from whatever response it generates to read and review myself.

    Kind of a simpler, free LexisNexis.

    I built a bunch of local search tools with MCP, and that’s where I get a lot of my value out of it.

    RAG workflows are incredibly useful, and with modern agents and tool calls they work very well.

    They kind of went out of style but it’s a perfect use case.
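    A minimal sketch of that kind of retrieval workflow, for anyone who hasn't wired one up: embed your documents locally, pull the nearest matches for a question, and build a prompt that carries the sources with it. The model name, the example documents, and the build_prompt() helper below are illustrative assumptions, not anything from the comment above.

    ```python
    # Hedged sketch of a tiny retrieval-augmented flow: embed local docs,
    # find the closest ones to a question, and build a prompt around them.
    from sentence_transformers import SentenceTransformer, util

    docs = [
        "Replit provides a browser IDE with hosting and a managed database.",
        "Postgres supports point-in-time recovery from WAL archives.",
        "A code freeze means no changes to production during the freeze window.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small model that runs locally
    doc_vecs = model.encode(docs, convert_to_tensor=True)

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the question."""
        q_vec = model.encode(question, convert_to_tensor=True)
        hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]
        return [docs[h["corpus_id"]] for h in hits]

    def build_prompt(question: str) -> str:
        """Assemble a prompt that pins the answer to the retrieved sources."""
        context = "\n".join(f"- {d}" for d in retrieve(question))
        return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"

    print(build_prompt("How can a deleted production database be recovered?"))
    # The prompt then goes to whatever LLM or agent you use; the point is that
    # you keep the retrieved sources and can read them yourself.
    ```

    The same shape works behind an MCP tool: the tool does the retrieval and returns the sources, and the agent just calls it.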

  • The tool isn’t returning all code, but it is sending code.

    I had discussions with my CTO and security team before integrating Claude code.

    I have to use Gemini in one specific workflow, and Gemini had a lot of landmines for how they use your data. Anthropic was easier to understand.

    Anthropic also has some guidance for running Claude Code in a container with a firewall and your specified dev tools; it works, but that’s not my area of expertise.

    The container doesn’t solve all the issues like using remote servers, but it does let you restrict what files and network requests Claude can access (so e.g. Claude can’t read your env vars or ssh key files).

    I do try local LLMs, but they’re not there yet on my machine for most use cases. Gemma 3n is decent if you need small-model performance and tool calls, phi4 works but isn’t a thinking model (the thinking variants are awful), and I’m exploring Dream Coder and diffusion models. R1 is still one of the best local models but frequently overthinks, even the new release. Context window is the largest limiting factor I find locally.

    I have to use Gemini in one specific workflow

    I would love some story on why AI is needed at all.

  • All I see is people chatting with an LLM as if it were a person. Ask “How bad is this on a scale of 1 to 100” and you’re just doomed to get some random answer based solely on whatever context is being fed in as input, the extent of which you probably don’t know.

    Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.

    The issue with LLMs working in human languages is that people eventually want to apply human traits to them, such as asking “why” as if the LLM knew its own decision process. It only takes an input and generates an output; it won’t be able to give any “meta thought” explanation about why it output X and not Y in the previous prompt.

    How bad is this on a scale of sad emoji to eggplant emoji.

    Children are replacing us, it's terrifying.

  • What are they helpful tools for then? A study showed that they make experienced developers 19% slower.

    ok so, i have large reservations with how LLMs are used. but when used correctly they can be helpful. but where and how?

    if you were to use it as a tutor, the same way you would ask a friend what a segment of code does, it will break down the code and tell you. it will get as nitty-gritty and elementary-school level as you wish, without judgement, and in whatever manner you prefer. it will recommend best practices, and will tell you why your code may not work, with the understanding that it does not have knowledge of the project you are working on (it’s not going to know the name of the function you are trying to load, but it will recommend checking for that while troubleshooting).

    it can rtfm and give you the parts you need for anything with available documentation, and it will link to it so you can verify it, which you should do often, just like you were taught to do with wikipedia articles.

    if you ask it for code, prepare to go through each line like a worksheet from high school to point out all the problems. while that’s a good exercise in the practical case of the task you’re on, it would be far better to write it yourself because you should know the particulars and scope.

    also it will format your code and provide informational comments if you can’t be bothered, though they will be generic.

    again, treat it correctly for its scope, not what it’s sold as by charlatans.

  • I have to use Gemini in one specific workflow

    I would love some story on why AI is needed at all.

    Batch processing: turning unstructured free-form text data into structured outputs.

    As a crappy example, imagine you wanted to download metadata about your albums but they’re all labelled “Various Artists”. You can use an LLM call to read the album description and fix the track artists for the tracks; now you can properly organize your collection.

    I’m using the same idea, different domain and a complex set of inputs.

    It can be much more cost effective than manually spending days tagging data and writing custom importers.

    You can definitely go lighter than LLMs. You can use gensim to do category matching, or you can use sentence transformers and nearest neighbours (this is basically what Semantle does), but an LLM performed the best on more complex document input.
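    As a concrete (and hedged) illustration of that album example: the complete() function below is a stand-in for whatever LLM client you actually use, and the prompt and JSON shape are made up for the sketch, not taken from the commenter's pipeline.

    ```python
    # Hedged sketch of batch-structuring free-form text with an LLM.
    # complete() is a placeholder for your actual LLM client (OpenAI, Anthropic,
    # a local model, ...); it takes a prompt and returns the model's text.
    import json

    def complete(prompt: str) -> str:
        """Send the prompt to your LLM of choice and return its text response."""
        raise NotImplementedError("wire up your LLM client here")

    PROMPT = """Read this album description and list each track with its real artist.
    Respond with JSON only, shaped like:
    {{"tracks": [{{"title": "...", "artist": "..."}}]}}

    Description:
    {description}
    """

    def fix_track_artists(description: str) -> list[dict]:
        """Turn one free-form album description into structured track/artist rows."""
        raw = complete(PROMPT.format(description=description))
        return json.loads(raw)["tracks"]   # raises if the model ignored the schema

    def run_batch(descriptions: list[str]) -> tuple[list[dict], list[str]]:
        """Process a whole collection; unparseable responses go to manual review."""
        fixed, needs_review = [], []
        for desc in descriptions:
            try:
                fixed.extend(fix_track_artists(desc))
            except (json.JSONDecodeError, KeyError):
                needs_review.append(desc)
        return fixed, needs_review
    ```

    For the lighter route mentioned above, you would instead embed each description with sentence transformers and nearest-neighbour match it against known labels; cheaper, but it tends to lose on messier documents.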

  • It seemed like the llm had decided it was in a brat scene and was trying to call down the thunder.

    Oops I dweted evewyfing 🥺

  • This post did not contain any content.

    Replit sucks

  • This only proves some of them can't solve all complex problems. I'm only claiming some of them can solve some complex problems. Not only by remembering exact solutions, but by remembering steps and actions used in building those solutions, generalizing them, and transferring them to new problems. Anyone who tries using it for programming will discover this very fast.

    PS: Some of them have already been used to solve problems and find patterns in data that humans couldn't get at any other way (particle research at CERN, bioinformatics, etc.).

    You're referring to more generic machine learning, not LLMs. These are vastly different technologies.

    And I have used them for programming, I know their limitations. They don't really transfer solutions to new problems, not on their own anyway. It usually requires pretty specific prompting. They can at best apply solutions to problems, but even then it's not a truly generalised thing, even if it seems to work for many cases.

    That's the trap you're falling into as well; LLMs look like they're doing all this stuff, because they're trained on data produced by people who actually do so. But they can't think of something truly novel. LLMs are mathematically unable to truly generalize, it would prove P=NP if they did (there was a paper from a researcher in IIRC Nijmegen that proved this). She also proved they won't scale, and lo and behold LLM performance is plateauing hard (except in very synthetic, artificial benchmarks designed to make LLMs look good).

  • “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    Even after he used "ALL CAPS"?!? Impossible!

  • Batch processing: turning unstructured free-form text data into structured outputs.

    As a crappy example, imagine you wanted to download metadata about your albums but they’re all labelled “Various Artists”. You can use an LLM call to read the album description and fix the track artists for the tracks; now you can properly organize your collection.

    I’m using the same idea, different domain and a complex set of inputs.

    It can be much more cost effective than manually spending days tagging data and writing custom importers.

    You can definitely go lighter than LLMs. You can use gensim to do category matching, or you can use sentence transformers and nearest neighbours (this is basically what Semantle does), but an LLM performed the best on more complex document input.

    That's pretty much what google says they use AI for, for structuring.

    Thanks for your insight.

  • You're referring to more generic machine learning, not LLMs. These are vastly different technologies.

    And I have used them for programming, I know their limitations. They don't really transfer solutions to new problems, not on their own anyway. It usually requires pretty specific prompting. They can at best apply solutions to problems, but even then it's not a truly generalised thing, even if it seems to work for many cases.

    That's the trap you're falling into as well; LLMs look like they're doing all this stuff, because they're trained on data produced by people who actually do so. But they can't think of something truly novel. LLMs are mathematically unable to truly generalize, it would prove P=NP if they did (there was a paper from a researcher in IIRC Nijmegen that proved this). She also proved they won't scale, and lo and behold LLM performance is plateauing hard (except in very synthetic, artificial benchmarks designed to make LLMs look good).

    They don’t really transfer solutions to new problems

    Let's say there is a binary format some old game uses (Doom), and in some of its lumps it can store indexed images, where each pixel is an index into a palette stored in another lump. There's also a programming language called Rust, a little-known library that can parse binary data of that format, and a Rust GUI library that not many people have used either. Would you consider it an "ability to transfer solutions to new problems" that it was able to implement extracting image data from that binary format using the library, extracting palette data from that format, converting that indexed image using the extracted palette into regular RGBA image data, and then rendering that as a window background using that GUI library, the only reference for which is a file with names and type signatures of functions? There's no similar Rust code in the wild at all for any of those scenarios.

    Most of this it was able to do from a few little prompts, maybe even from the first one. There sure were a few little issues along the way that required reprompting and figuring things out together with it. Stuff like this with AI can take about half an hour, while doing the whole thing fully manually could easily take multiple days just for the sake of figuring out the APIs of the libraries involved and the intricacies of recoding an indexed image to RGBA. For me this is overpowered enough right now, and it's likely to improve even more in the future.
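    For anyone who hasn't poked at the format, the indexed-to-RGBA step described above is small on its own. This is a hedged Python sketch (the commenter's actual code was Rust and used specific WAD and GUI libraries); it assumes a raw indexed pixel buffer such as a Doom flat plus the PLAYPAL lump, whose first 768 bytes are palette 0.

    ```python
    # Indexed image + palette -> RGBA, the conversion step described above.
    # Assumes `pixels` is a raw buffer of palette indices (e.g. a 64x64 Doom
    # flat, 4096 bytes) and `playpal` is the PLAYPAL lump, whose first
    # 768 bytes are palette 0 stored as 256 consecutive (r, g, b) triples.

    def rgba_from_indexed(pixels: bytes, playpal: bytes) -> bytes:
        palette = playpal[:768]              # 256 colors * 3 bytes
        out = bytearray()
        for index in pixels:                 # iterating bytes yields ints
            r, g, b = palette[index * 3 : index * 3 + 3]
            out += bytes((r, g, b, 255))     # opaque alpha
        return bytes(out)

    # A 64x64 flat becomes 4096 * 4 RGBA bytes, ready to hand to whatever
    # GUI or texture API you render with.
    ```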

  • “The [AI] safety stuff is more visceral to me after a weekend of vibe hacking,” Lemkin said. “I explicitly told it eleven times in ALL CAPS not to do this. I am a little worried about safety now.”

    This sounds like something straight out of The Onion.

    That is also the premise of one of the stories in Asimov's I, Robot. The human operator did not give the command with enough emphasis, so the robot went and did something incredibly stupid.

    Those stories did not age well... Or now I guess they did?

  • This post did not contain any content.

    So it's the LLM's fault for violating Best Practices, SOP, and Opsec that the rest of us learned about in Year One?

    Someone needs to be shown the door and ridiculed into therapy.

  • This post did not contain any content.

    His mood shifted the next day when he found Replit “was lying and being deceptive all day. It kept covering up bugs and issues by creating fake data, fake reports, and worst of all, lying about our unit test.”

    yeah that's what it does

  • They don’t really transfer solutions to new problems

    Let's say there is a binary format some old game uses (Doom), and in some of its lumps it can store indexed images, where each pixel is an index into a palette stored in another lump. There's also a programming language called Rust, a little-known library that can parse binary data of that format, and a Rust GUI library that not many people have used either. Would you consider it an "ability to transfer solutions to new problems" that it was able to implement extracting image data from that binary format using the library, extracting palette data from that format, converting that indexed image using the extracted palette into regular RGBA image data, and then rendering that as a window background using that GUI library, the only reference for which is a file with names and type signatures of functions? There's no similar Rust code in the wild at all for any of those scenarios.

    Most of this it was able to do from a few little prompts, maybe even from the first one. There sure were a few little issues along the way that required reprompting and figuring things out together with it. Stuff like this with AI can take about half an hour, while doing the whole thing fully manually could easily take multiple days just for the sake of figuring out the APIs of the libraries involved and the intricacies of recoding an indexed image to RGBA. For me this is overpowered enough right now, and it's likely to improve even more in the future.

    That's applying existing solutions to a different programming language or domain, but ultimately every single technique used already exists. It only applied what it knew; it did not come up with something new. The problem as stated isn't really "new" either: image extraction, conversion, and rendering isn't exactly a novel problem.

    I'm not disputing that LLMs can speed up some work; I know it occasionally does so for me as well. But what you have to understand is that the LLM only remembered similar problems and their solutions; it did not at any point invent something truly new. I understand the distinction is difficult to make.

  • This post did not contain any content.

    Headline should say, "Incompetent project managers fuck up by not controlling production database access. Oh well."

  • That's applying existing solutions to a different programming language or domain, but ultimately every single technique used already exists. It only applied what it knew; it did not come up with something new. The problem as stated isn't really "new" either: image extraction, conversion, and rendering isn't exactly a novel problem.

    I'm not disputing that LLMs can speed up some work; I know it occasionally does so for me as well. But what you have to understand is that the LLM only remembered similar problems and their solutions; it did not at any point invent something truly new. I understand the distinction is difficult to make.

    I understand what you have in mind; I had similar intuitions about AI in the early 2000s.
    What exactly counts as "truly new" is an interesting topic ofc, but it's a separate topic.
    Nowadays I try to look at things more empirically, without projecting my internal intuitions onto everything.
    In practice it does generalize knowledge, use many forms of abstract reasoning, and transfer knowledge across different domains.
    And it can do coding way beyond the level of complexity of what an average software developer does in everyday work.

  • This post did not contain any content.

    Replit is a vibe coding service now? Swear it just used to be a place to write code in projects

  • Oops I dweted evewyfing 🥺

    I knew it would make you mad but I did it anyway.

    I don't think you have the guts to do anything about it either, vibe coder.
