AI Code Commits

Technology
  • Looking forward to the new era of security holes caused by this

    You'd have to be an idiot to merge anything from an AI without going through it line by line. Which really is the problem with AI: it's mostly fine if you keep an eye on it, but the fact that you have to keep an eye on it kind of renders the whole thing pointless.

    It's like self-driving cars: if I have to keep an eye on it to make sure it won't randomly crash into a tree, I might as well drive the damn thing myself.

  • They're also selling self-driving cars... The question is: when will self-driving cars kill fewer people per passenger-mile than average human drivers?

    There's more to it than that; there's also the cost of implementation.

    If a self-driving car kills on average one fewer human than your average human driver does, but costs $100,000 to install in the car, then it still isn't worth implementing.

    Yes, I know that puts a price on human life, but that is how economics works.

  • "AI-answers will be banned." Not enough?

  • This feels like an attempt to destroy open source projects. Overwhelm developers with crap PRs so they can't fix real issues.

    It won't work long term, because I can't imagine anyone staying on GitHub after it gets bad.

    destroy open source projects

    I do believe that too. The AIs are stealing all the code and removing the licenses, and the OSI recently classified "binary blobs" as open source. LLM companies need fresh content and will try anything to steal it.

  • This was already possible without machine learning

    It's automated and exponentially worse. It's not the same thing.

  • I see some problems here.

    An LLM providing "an opinion" is not a thing, as far as current tech goes. It's just statistically right or wrong, put into words, which does not fit nicely with real use cases.
    Also, lots of tools already have autofix modes that can (on demand) handle many of the minor issues you mention, without any LLM. Assuming static analysis is already in place and decent tooling is used, this would not have to reach a human or an AI agent at all before getting fixed, with little resources.
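
    To illustrate (the specific tool here is just an example, not one named in this thread), a conventional linter's autofix pass covers this class of minor issue deterministically, with no model involved:

    ```python
    # Minimal sketch: run a deterministic lint autofix pass over a checkout.
    # "ruff" is only an example of a linter with an autofix mode; any static
    # analysis tool with equivalent --fix behaviour plays the same role.
    import subprocess
    import sys

    def autofix(path: str) -> int:
        """Apply automatic fixes (unused imports, import ordering, etc.)."""
        result = subprocess.run(["ruff", "check", "--fix", path])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(autofix(sys.argv[1] if len(sys.argv) > 1 else "."))
    ```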

    As anecdotal evidence, we regularly look into those tools on the job. Granted, we don't have billions of lines of code to check, but so far they've been at best useless. Another anecdotal data point is the curl project (and others following suit) recently getting buried under a mountain of bogus issues.

    I have no doubt that there is a place for human-sounding review and advice, alongside other more common uses like completion and documentation, but ultimately these systems are not able to think, by design. The work still has to be done, and they can't go much beyond platitudes. You ask how common the horrible cases are, but that might not be the correct question. Horrific comments are easy to spot and filter out. Perfectly decent-looking "minor fixes" that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context, are the issue. And those, even if rare (empirically I'd say they are not that rare for now), are so much harder to spot without full human analysis. They are a real threat.
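
    To make the "swapped but compatible parameters" case concrete, here is a hypothetical illustration (names and numbers are invented, not taken from any real PR): both calls type-check, read plausibly, and pass every automated check, but only one of them is right.

    ```python
    def apply_discount(price: float, rate: float) -> float:
        """Return the price after subtracting price * rate."""
        return price * (1.0 - rate)

    # Correct call: 100.00 with a 15% discount -> 85.0
    total = apply_discount(100.00, 0.15)

    # Subtly wrong "minor fix": the arguments are swapped. Both are floats, so
    # nothing fails and every check passes, but the result is
    # 0.15 * (1.0 - 100.00) = -14.85, which is silently nonsense.
    total = apply_discount(0.15, 100.00)
    ```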

    Yet another anecdote… yes, that's a lot of them. Given the current hype, I can mostly only base my findings on personal experience. I use AI-based code completion, assuming the suggestion is short enough to check at a glance and the context is small enough that it shouldn't make mistakes. At most two or three lines at a time. Even in this context, while checking that the generated code matches what I was going to write, I've seen a handful of mistakes slip through over a few months. It makes me dread what could get through a PR system, where the codebase is not necessarily fresh in the mind of the reviewer.

    This is not to say that none of this is useful, but to be, it would require an extremely high level of trust, far higher than we grant current human intervention (which is also not great and a source of mistakes, I'm very aware of that). The goal should not be to emulate human mistakes, but to make something better.

    An LLM providing "an opinion" is not a thing

    Agreed, but can we just use the common parlance? Explaining completions every time is tedious, and most everyone talking about it at this level always knows. It doesn't think, it doesn't know anything, but it's a lot easier to use those words to mean something that seems analogous. But yeah, I've been on your side of this conversation before and let's just read all that as agreed.

    this would not have to reach a human or an AI agent at all before getting fixed, with little resources

    There are tools that do some of this automatically. I picked really low hanging fruit that I still see every single day in multiple environments. LLMs attempt (wrong word here, I know) more, but they need review and acceptance by a human expert.

    Perfectly decent-looking "minor fixes" that are well worded, follow guidelines, and pass all checks, while introducing an off-by-one error or swapping two parameters that happen to be compatible and make sense in context, are the issue. And those, even if rare (empirically I'd say they are not that rare for now), are so much harder to spot without full human analysis. They are a real threat.

    I get that folks are trying to fully automate this. That's fucking stupid. I don't let seasoned developers commit code to my repos without review, so why would I let AI? Incidentally, seasoned developers can also suggest fixes with subtle errors. Sometimes those escape into the code base, or perfectly good code that worked fine on prem goes to shit in the cloud. I just had to argue my team into fixing something that, in some cases, executed over 10k SQL statements on a single page load due to lazy loading. That shit worked "great" on prem but was taking up to 90 seconds in the cloud. All written by humans.
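
    For what it's worth, that lazy-loading failure mode is the classic N+1 query pattern. A hypothetical sketch (schema and names invented, sqlite3 used only so it stands alone; it's not the stack from the example above):

    ```python
    # N+1 sketch: one query for the parent rows, then one more query per row.
    # Cheap when the database sits next to the app, painful when every round
    # trip crosses a cloud network hop.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
        CREATE TABLE items  (id INTEGER PRIMARY KEY, order_id INTEGER, name TEXT);
        INSERT INTO orders VALUES (1, 'a'), (2, 'b');
        INSERT INTO items  VALUES (1, 1, 'x'), (2, 1, 'y'), (3, 2, 'z');
    """)

    # Lazy loading: N+1 round trips (1 for the orders, then 1 per order).
    orders = conn.execute("SELECT id, customer FROM orders").fetchall()
    for order_id, _customer in orders:
        conn.execute("SELECT name FROM items WHERE order_id = ?", (order_id,)).fetchall()

    # Eager alternative: a single JOIN fetches the same data in one round trip.
    rows = conn.execute("""
        SELECT o.id, o.customer, i.name
        FROM orders o JOIN items i ON i.order_id = o.id
    """).fetchall()
    ```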

    The goal should not be to emulate human mistakes, but to make something better.

    I'm sure that is someone's goal, but LLMs aren't going to do that. They are a different tool that helps but does not in any way replace human experts. And I'm caught in the middle of every conversation, because I don't hate them enough for one side and I'm not hyped enough about them for the other. But I've been working with them for several years now, I've watched them grow since GPT-2, and I understand them pretty well. Well enough not to trust them to the degree some idiots do, but I still find them really handy.

  • It's already kind of happening. The curl project is having a really bad time. No idea if the "bug" submissions are themselves automated, but the content of the filings is pure AI nonsense.

    There's more to it than that; there's also the cost of implementation.

    If a self-driving car kills on average one fewer human than your average human driver does, but costs $100,000 to install in the car, then it still isn't worth implementing.

    Yes, I know that puts a price on human life, but that is how economics works.

    $100K for a safer driver might be well worth it to a lot of people, particularly if it's a one-time charge. If that $100K autopilot can serve for seven years, that's way cheaper than paying a chauffeur.
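
    Rough numbers (the system cost and service life are the figures above; the chauffeur salary is an assumed round figure, purely for illustration):

    ```python
    # Back-of-the-envelope amortisation of the one-time autopilot cost.
    autopilot_cost = 100_000      # USD, one-time charge (figure from this thread)
    service_years = 7             # assumed service life (figure from this thread)
    chauffeur_salary = 50_000     # USD per year, assumed illustrative figure

    per_year = autopilot_cost / service_years   # ~14,286 USD per year
    print(f"Autopilot: ~${per_year:,.0f}/yr vs chauffeur: ${chauffeur_salary:,}/yr")
    ```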

  • "AI-answers will be banned." not enough?

    Detecting the AI answers is an arms-race.

  • The "AI agent" approach's goal doesn't include a human reviewer. As in the agent is independent, or is reviewed by other AI agents. Full automation.

    They are selling those AI agents as working right now despite the obvious flaws.

    From what I know, those agents can be absolutely fantastic as long as they run under the strict guidance of a senior developer who really knows how to use them. Fully autonomous agents sound like a terrible idea.
