Software developer, here. (No, not a "vibe coder." I actually know how to read and write my own code and what it does.)
Just had the opportunity to test GPT 5 as a coding assistant in Copilot for VS Code, which in my opinion is the only legitimately useful purpose for LLMs. (No, not to write everything for me, just to do some of the more tedious tasks faster.) The IDE itself can help keep them in line, because it detects when they screw up. Which is all the time, due to their nature. Even recent and relatively "good" models like Sonnet need constant babysitting.
GPT 5 failed spectacularly. So badly, in fact, that I'm glad I only set it to analysis tasks and not to any write tasks. I will not be using it for anything else any time soon.
-
Yeah, right? I tried it yesterday to build a simple form for me. I told it to look at the structure of other forms for reference, which it did, and somehow it used NONE of the UI components and helpers from those forms. It was bafflingly bad.
-
Yeah, LLMs are decent with coding tasks if you know what you're doing and can properly guide them (and check their work!), but fuck if they don't take a lot of effort to rein in. I will say they're pretty damned good at debugging the shit I wrote. I've been working on an audit project for a few months and 4o/5 have helped me a good bit to find persistent errors in my execution logic that I just kept missing on rereads and debug runs.
But generating new code is painful. I had 5 generate a new function for me yesterday to do some issues recon and report generation, and I spent 20 minutes going back and forth with it as it dropped fields from the output repeatedly. Even on 5, it still struggles not to give you the same wrong answer more than once, or it just waffles between wrong answers.
-
Despite the “official” coding score for GPT-5 being higher, Claude Sonnet still seems to blow it out of the water. That seems to suggest they're training to the test and the test must not be a very good one. Or they're lying.
-
I'm no longer even confident in modern LLMs to do stuff like convert a table schema or JSON document into a POCO. I tried this the other day with a field list from a table creation script. So all it had to do was reformat the fields into a dumb C# model. Inexplicably, it did fine except for omitting a random field in the middle of the list. Kinda shakes your confidence in LLMs for even the most basic programming tasks.
-
They'd never be lying! Look at these beautiful graphs from their presentation of GPT5. They'd never!
Source: https://www.theverge.com/news/756444/openai-gpt-5-vibe-graphing-chart-crime
-
Wut…did GPT5 evaluate itself?
-
I tried GPT-5 to write some code the other day and was quite unimpressed with how lazy it is. For every single thing, it needed nudging. I'm going back to Sonnet and Gemini. And even so, you're right. As it stands, LLMs are useful at refactoring and writing boilerplate and repetitive code, which does save time. But they're definitely shit at actually solving non-trivial problems in code and at designing and planning implementation at a high level.
They're basically a better IntelliSense and automated refactoring tool, but I wouldn't trust them with proper software engineering tasks. All this vibe coding and especially the agentic development bullshit that people (mainly uneducated users and the AI vendors themselves) are shilling these days, I'm going nowhere near it.
I work on a professional software development team in a business that is pushing the AI coding stuff really hard. So many of my coworkers now routinely use agentic development tools to do most (if not all) of their work for them. And guess what: in every other PR that goes in, random features that had been built and were working get removed entirely, so then we have to do extra work to literally rebuild things that one of these AI agents ripped out. smh
-
Have you given Qwen or GLM 4.5 a shot?
-
Now that we have vibe coding and all programmers have been sacked, they're apparently trying out vibe presenting and vibe graphing. Management watch out, you're obviously next!
-
Not 5 minutes ago I asked gpt5 how to go back to gpt-4o.
GPT5 was spitting out some strange bs for simple coding prompts that 4o handles well.
-
Have you given Qwen or GLM 4.5 a shot?
Not yet. I'll give them a shot if they promise never to say "you're absolutely correct" or give me un-requested summaries about how awesome they are in the middle of an unfinished task.
Actually, I have to give GPT 5 credit on one thing: it's actually sort of paying attention to the copilot-instructions.md file, because I put this snippet in it: "You don't celebrate half-finished features, and your summaries of what you've accomplished are not only rare, they're never more than five sentences long. You just get straight to the point." And - surprise, surprise - it has strictly followed that instruction. Fucks up everything else, though.
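For anyone who hasn't tried this: the instructions file is just freeform markdown that Copilot reads as standing directives for the workspace. A rough sketch of the idea, assuming a typical repo-level setup; the path comment and the lines beyond the quoted snippet are illustrative, not from my actual file:

```markdown
<!-- .github/copilot-instructions.md (exact path/setting depends on your Copilot setup) -->
You don't celebrate half-finished features, and your summaries of what you've
accomplished are not only rare, they're never more than five sentences long.
You just get straight to the point.

<!-- Illustrative extras, not quoted above: -->
Do not modify code you weren't asked to touch, and do not reformat existing comments.
Never claim a task is finished unless the project builds and the tests pass.
```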
-
I tried GPT-5 to write some code the other day and was quite unimpressed with how lazy it is. For every single thing, it needed nudging. I'm going back to Sonnet and Gemini. And even so, you're right. As it stands, LLMs are useful at refactoring and writing boilerplate and repetitive code, which does save time. But they're definitely shit at actually solving non-trivial problems in code and at designing and planning implementation at a high level.
They're basically a better IntelliSense and automated refactoring tool, but I wouldn't trust them with proper software engineering tasks. All this vibe coding and especially the agentic development bullshit that people (mainly uneducated users and the AI vendors themselves) are shilling these days, I'm going nowhere near it.
I work on a professional software development team in a business that is pushing the AI coding stuff really hard. So many of my coworkers now routinely use agentic development tools to do most (if not all) of their work for them. And guess what: in every other PR that goes in, random features that had been built and were working get removed entirely, so then we have to do extra work to literally rebuild things that one of these AI agents ripped out. smh
I'm in a similar situation. I'm even an AI proponent. I think it's a great tool when used properly. I've had great success solving basically trivial problems with small scripts. And code review is helpful. Code completion is helpful. It makes me faster, but you have to know when and how to leverage it.
Even on tasks it isn't good at, it often helps me frame my own thoughts. It can identify issues better than it can fix them. So if I say "here is the current architecture, what is the best way to implement <feature>, and explain why," it will give a plan. It may not be a great plan, but as it explains it, I can easily identify the stuff it has wrong. Sometimes it's close to a workable plan. Other times it's not. Other times it will confidently lead you down a rabbit hole. That's the real time waster.
"Why won't the context load for this unit test?"
You're missing this annotation.
"Yeah that didn't do it. What else."
You need this plugin.
"Yeah it's already there."
You need this other annotation.
"Okay that got a different error message."
You need another annotation.
"That didn't work either. You don't actually know what the problem is do you?"
Sad computer beeps.
To just take the output and run with it is inviting disaster. It'll bite you every time and the harder the code the worse it performs.
-
I'm no longer even confident in modern LLMs to do stuff like convert a table schema or JSON document into a POCO. I tried this the other day with a field list from a table creation script. So all it had to do was reformat the fields into a dumb C# model. Inexplicably, it did fine except for omitting a random field in the middle of the list. Kinda shakes your confidence in LLMs for even the most basic programming tasks.
More and more, for tasks like that I simply will not use an LLM at all. I'll use a nice, predictable, deterministic script. Weirdly, LLMs are pretty decent at writing those.
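Something like this, for instance: a minimal sketch of the kind of deterministic script I mean for the schema-to-POCO case above. The field names, the type map, and the naive one-column-per-line parsing are illustrative assumptions, not a real SQL parser:

```python
# Deterministic alternative to having an LLM hand-convert a field list:
# parse a simple CREATE TABLE column list and emit a plain C# model.
import re

# Minimal SQL -> C# type map; extend for whatever your schema actually uses.
TYPE_MAP = {
    "int": "int",
    "bigint": "long",
    "bit": "bool",
    "datetime": "DateTime",
    "nvarchar": "string",
    "decimal": "decimal",
}

def fields_to_poco(class_name: str, column_list: str) -> str:
    """Turn a one-column-per-line field list into a C# POCO."""
    props = []
    for line in column_list.strip().splitlines():
        line = line.strip().rstrip(",")
        if not line or line.upper().startswith(("PRIMARY KEY", "CONSTRAINT")):
            continue
        match = re.match(r"(\w+)\s+(\w+)", line)  # e.g. "Email NVARCHAR(200) NULL"
        if not match:
            continue
        name, sql_type = match.group(1), match.group(2).lower()
        cs_type = TYPE_MAP.get(sql_type, "string")
        props.append(f"    public {cs_type} {name} {{ get; set; }}")
    return f"public class {class_name}\n{{\n" + "\n".join(props) + "\n}"

# Hypothetical field list; every column comes out the other side, every time.
columns = """
    CustomerId INT NOT NULL,
    Email NVARCHAR(200) NULL,
    CreatedAt DATETIME NOT NULL,
    IsActive BIT NOT NULL,
    PRIMARY KEY (CustomerId)
"""
print(fields_to_poco("Customer", columns))
```

No random field silently vanishing from the middle of the list, and you can rerun it on the next table without renegotiating anything.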
-
Dude, forgetting stuff has to be one of the most frustrating parts of the entire process. Like forgetting a column in a database, or just an entire piece of a function you just pasted in... Or trying to change things you never asked it to touch. So freaking annoying. I had standing instructions in its memory not to leave out pieces or modify things I didn't ask for, and I'll put that stuff in the prompt too, and it just does not care lol.
I've used it a lot for coding because I'm not a real programmer (more a code hacker) and need to get things done for a website, but I know just enough to know it's really stupid sometimes lol.
-
Dude, forgetting stuff has to be one of the most frustrating parts of the entire process. Like forgetting a column in a database, or just an entire piece of a function you just pasted in
It was actually worse. I was pulling data out of local logs and processing events. I asked it to assess a couple of columns that I was struggling to parse properly, and it got those ones in, but dropped some of my existing columns. I pointed out the error, it acknowledged the issue, then spat out code that reverted to the first output!
Though, that wasn't nearly as bad as it telling me that a variable a couple hundred lines and multiple transformations in wasn't being populated from an earlier variable, so I literally went in, copied each declaration line, and sent them back like I was smacking an intern on the nose or something....
For a bot designed to read and analyze text, it is surprisingly bad at the whole 'reading' aspect. But maybe that's just how human-like the intelligence is /s
Or trying to change things you never asked it to touch. So freaking annoying. I had standing instructions in its memory not to leave out pieces or modify things I didn't ask for, and I'll put that stuff in the prompt too, and it just does not care lol
OMFG this. I've had decent luck recently after setting up a project and explicitly laying out a number of global directives, because yeah, it was awful trying to figure out exactly what changed when I diff the input and output, and fucking everything is red because even the goddamned comments got changed. But even just trying to make it understand basic style requirements was a solid half hour of arguing with it (only partially because I forgot the proper names of the casings), just so it wouldn't make me re-lint the whole goddamned script when I'd only told it to analyze and fix one item.
-
Yeah, LLMs are decent with coding tasks if you know what you're doing and can properly guide them (and check their work!), but fuck if they don't take a lot of effort to rein in. I will say they're pretty damned good at debugging the shit I wrote. I've been working on an audit project for a few months and 4o/5 have helped me a good bit to find persistent errors in my execution logic that I just kept missing on rereads and debug runs.
But generating new code is painful. I had 5 generate a new function for me yesterday to do some issues recon and report generation, and I spent 20 minutes going back and forth with it as it dropped fields from the output repeatedly. Even on 5, it still struggles not to give you the same wrong answer more than once, or it just waffles between wrong answers.
Out of curiosity, do you feel that you would have been able to write that new function without an LLM in less time than you spent fighting GPT5?
-
This has been my experience as well, only the company I work for has mandated that we must use AI tools every day (regardless of whether we want/need them) and is actively tracking our usage to make sure we comply.
My productivity has plummeted. The tool we use (Cursor) requires so much hand-holding that it's like having a student dev with me at all times... only a real student would actually absorb information and learn over time, unlike this glorified Markov Chain. If I had a human junior dev, they could be a productive and semi-competent coder in 6 months. But 6 months from now, the LLM is still going to be making all of the same mistakes it is now.
It's gotten to the point where I ask the LLM to solve a problem for me just so that I can hit the required usage metrics, but completely ignore its output. And it makes me die a little bit inside every time I consider how much water/energy I'm wasting for literally zero benefit.
-
Yessir, I've basically run into all of that. It's fucking infuriating. It really is like talking to a toddler at times. There seems to be a limit to the complexity of what it can process before it just starts messing everything up. Like once you hit its limit, it will not process the entire thing no matter how many times you try to fix it, like in your example. You fix one problem and then it just forgets a different piece. FFFFFFFFFF.
-
That sounds horrific. Maybe you can ask the AI to write a plugin that automatically invokes the AI in the background and throws away the result.
We are strongly encouraged to use the tools, and Copilot review is automatic, but that's it. I'm actually about to accept a leadership position at another AI-heavy company, and hopefully I can leverage that position to guide a sensible AI policy.
But at the heart of it, I need curious minds that want to learn. Give me those and I can build a strong team with or without AI. Without them, all the AI in the world won't help.
-