AI Code Commits
-
daniel:// stenberg:// (@bagder@mastodon.social)
There are MCP servers popping up that interface GitHub and the likes, so even if we make them do Copilot issues proper, we will soon see random "AI agents" interact directly with issues and PRs. Non-copilot AI style. There is no escape. This is the new reality.
Mastodon (mastodon.social)
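For a sense of how low the barrier is: once an agent holds a token, commenting on an issue is a single authenticated HTTP call. A minimal sketch (the repo, issue number, and token below are made up; the endpoint is GitHub's documented REST route for issue comments):

```python
# Sketch of the request an "AI agent" would assemble to comment on an issue.
# POST /repos/{owner}/{repo}/issues/{issue_number}/comments is GitHub's
# documented REST endpoint; everything else here is illustrative.

def build_comment_request(owner: str, repo: str, issue: int, body: str, token: str):
    """Assemble (url, headers, payload) for creating an issue comment."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{issue}/comments"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    return url, headers, {"body": body}

url, headers, payload = build_comment_request(
    "example-org", "example-repo", 42, "LLM-generated suggestion...", "ghp_fake"
)
print(url)  # https://api.github.com/repos/example-org/example-repo/issues/42/comments
```

Actually sending it is one `requests.post(url, headers=headers, json=payload)` away, which is why who can open issues matters more than which tooling wrote them.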
-
Sure, on public free “you’re the product” sites.
-
Looking forward to the new era of security holes caused by this
-
There was this post on Reddit listing some PRs opened by AI at .NET:
https://www.reddit.com/r/ExperiencedDevs/comments/1krttqo/my_new_hobby_watching_ai_slowly_drive_microsoft/?share_id=ks1HasJ2tqDuCOW6TCYTz
It's pretty funny and depressing at the same time...
-
The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, AI is the same as any other. It offers an opinion and you can judge for yourself whether its points need to be addressed or not. I'll be interested to see whether its comments affect the comments of the tech lead.
I've seen a preview of a system that detects problems like failing sonar analysis and it can offer a PR to fix it. I suppose for simple enough fixes like removing unused imports or unused code it might be fine. It gets static analysis and review like any other PR, so it's not going to be merging any defects without getting past a human reviewer.
I don't know how good any of this shit actually is. I tested the AI review once and it didn't have a lot to say because it was a really simple PR. It's a tool. When it does good, fine. When it doesn't, it probably won't take any more effort than any other bad input.
I'm sure you can always find horrific examples, but the question is how common they are and how subtle any introduced bugs are, to get past the developer and a human reviewer. Might depend more on time pressure than anything, like always.
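The "remove unused imports" class of fix mentioned above is mechanical enough that it barely needs an LLM. A rough sketch of the detection half, assuming Python sources (the function name is mine, not from any real product):

```python
import ast

def find_unused_imports(source: str) -> list[str]:
    """Return names that are imported but never referenced in the module."""
    tree = ast.parse(source)
    imported = {}  # bound name -> line number where it was imported
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import os.path" binds the top-level name "os"
                imported[alias.asname or alias.name.split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
    # Any ast.Name covers direct use and attribute access like sys.argv
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    return [name for name in imported if name not in used]

sample = "import os\nimport sys\n\nprint(sys.argv)\n"
print(find_unused_imports(sample))  # ['os']
```

A real bot would pair this with a rewrite step and open the PR; as the comment notes, the result still has to get past static analysis and a human reviewer like any other change.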
-
-
The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, AI is the same as any other. It offers an opinion and you can judge for yourself whether its points need to be addressed or not. I'll be interested to see whether its comments affect the comments of the tech lead.
The "AI agent" approach's goal doesn't include a human reviewer. As in the agent is independent, or is reviewed by other AI agents. Full automation.
They are selling those AI agents as working right now despite the obvious flaws.
-
Sure, on public free “you’re the product” sites.
Creating issues is free to a large number of people you don't really control. Whether that's the general public or some customers who have access to your issue tracker and love AI doesn't really matter; if anything, dealing with the public is easier, since you can just ban members of the public who misbehave.
-
This feels like an attempt to destroy open source projects. Overwhelm developers with crap PRs so they can't fix real issues.
It won't work long term, because I can't imagine anyone staying on GitHub after it gets bad.
-
There was this post on Reddit listing some PRs opened by AI at .NET:
https://www.reddit.com/r/ExperiencedDevs/comments/1krttqo/my_new_hobby_watching_ai_slowly_drive_microsoft/?share_id=ks1HasJ2tqDuCOW6TCYTz
It's pretty funny and depressing at the same time...
That's the one I remember. Some of the comments in the GH PRs and issues themselves are so funny.
On a side note, I'm seeing more and more people transfer over to Codeberg and other such git alternatives. I've used GH for over 15 years and it's interesting to see the shift occurring. Then again, to those of us who have been online for a while, it feels like the natural order of things. GH is trying to get as much $$ as it can (which includes AI) and its new features are becoming tied to monetary components. Meanwhile the community is making things its users actually want.
-
GitHub self-destruction protocol activated.
-
The "AI agent" approach's goal doesn't include a human reviewer. As in the agent is independent, or is reviewed by other AI agents. Full automation.
They are selling those AI agents as working right now despite the obvious flaws.
They're also selling self-driving cars... the question is: when will the self driving cars kill fewer people per passenger-mile than average human drivers?
-
They're also selling self-driving cars... the question is: when will the self driving cars kill fewer people per passenger-mile than average human drivers?
Right now they do, thanks to a combination of extra oversight, generally travelling at slow speeds, and being restricted in area. Kind of like how children are less likely to die in a swimming pool with lifeguards than in rivers and at beaches without lifeguards.
Once they are released into the wild I expect a number of high-profile deaths, but I also assume those fatalities will be significantly lower than the human average, since the cars will be tuned to be overly cautious. I do expect them to have a high rate of low-speed collisions when they encounter confusing or absent road markings in rural areas.
-
From what I read, the curl folks aren't worried, as they do not accept AI PRs. While terrible for some in the short term, I believe in the long term this will create a great gap between the dying closed-source world and the growing open-source one. It is a weak, temporary attempt to harm open source, but useful long term. No reasons left to stick with big corpo. None.
-
Recently some issues were opened on a repo of mine; they confused me at first, until I realized they were written by an LLM. Really annoying.
-
GitHub self-destruction protocol activated.
No need to worry, similar stuff will get developed for other platforms as well.
-
They're also selling self-driving cars... the question is: when will the self driving cars kill fewer people per passenger-mile than average human drivers?
The issue will remain that liability will be completely transferred from individual humans to faceless corporations. I want self-driving cars to be a thing - computers can certainly be better than humans at driving - but I don't want that technology to be profit-motivated.
They will inevitably cause some accidents that could have been prevented if not for the "move fast and break things" style of tech development. A negligent driver can go to jail; a negligent corporation gets a slap on the wrist in our society. And collision victims will have to face powerful litigation teams when the companies inevitably refuse to pay for damages through automated AI denials, like private health insurance companies do.
-
This was already possible without machine learning
-
This was already possible without machine learning
Yes, but it's become a serious problem now because of language models.
-
No need to worry, similar stuff will get developed for other platforms as well.
Even Codeberg?