Why doesn't Nvidia have more competition?
-
What a shit article. Doesn't explain the software situation. While CUDA is the most popular, a lot of frameworks do support AMD chips.
-
What a shit article. Doesn't explain the software situation. While CUDA is the most popular, a lot of frameworks do support AMD chips.
Naji said the firm has also “developed the broadest ecosystem” of developers and software.
“And so it's just so much easier to … build an application, build an AI model on top of those chips,” he said.
-
Because its competitors care about Not Invented Here instead of building common industry standards.
-
What a shit article. Doesn't explain the software situation. While CUDA is the most popular, a lot of frameworks do support AMD chips.
A comically bad "article".
-
Naji said the firm has also “developed the broadest ecosystem” of developers and software.
“And so it's just so much easier to … build an application, build an AI model on top of those chips,” he said.
Expounding on that: Nvidia has very deeply ingrained itself in educational and research institutions. People learning GPU compute are being taught CUDA on Nvidia hardware. Researchers have access to farms of Nvidia chips.
AMD has basically gone with the "build it and they will come" attitude, with results to match.
-
Naji said the firm has also “developed the broadest ecosystem” of developers and software.
“And so it's just so much easier to … build an application, build an AI model on top of those chips,” he said.
It's literally the most surface-level take. It doesn't even mention what CUDA is, or AMD's efforts to run it:
https://www.xda-developers.com/nvidia-cuda-amd-zluda/
But that project is no longer funded by AMD or Intel.
AMD GPUs are still supported by frameworks like PyTorch.
While Nvidia might be the fastest, it's not always the cheapest option, especially if you rent in the cloud. When I last checked, it was cheaper to rent AMD GPUs.
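For what it's worth, here's a minimal sketch of what that cross-vendor support looks like in practice (assuming a ROCm build of PyTorch on the AMD side; there, AMD GPUs show up through the regular torch.cuda API):

    import torch

    # On ROCm builds of PyTorch, AMD GPUs are exposed through the same
    # torch.cuda API that Nvidia users call (HIP stands in for CUDA),
    # so this exact code runs on either vendor's hardware.
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        print("HIP version:", torch.version.hip)  # None on CUDA builds
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x  # matmul dispatched to rocBLAS on AMD, cuBLAS on Nvidia
        print("Checksum:", y.sum().item())
    else:
        print("No GPU visible to PyTorch")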
-
Corporate consolidation and monopoly/oligopoly is usually why.
-
How to create a successful GPU company in 2025:
- Step 1: build a time machine and go back 30 years
-
Because its competitors care about Not Invented Here instead of building common industry standards.
Well, Intel tried with oneAPI. As for AMD, they barely go five minutes between shooting themselves in the foot. It's unbelievable to watch.
-
Well, Intel tried with oneAPI. As for AMD, they barely go five minutes between shooting themselves in the foot. It's unbelievable to watch.
Intel tried with oneAPI because ROCm was not invented here.
-
Expounding on that: Nvidia has very deeply ingrained itself in educational and research institutions. People learning GPU compute are being taught CUDA on Nvidia hardware. Researchers have access to farms of Nvidia chips.
AMD has basically gone with the "build it and they will come" attitude, with results to match.
AMD has basically gone with the “build it and they will come” attitude
Except they didn't.
They repeatedly fumble the software with little mistakes (looking at you, Flash Attention). They price the MI300X, W7900, and any high-VRAM GPU through the roof, when they have every reason to be more competitive and undercut Nvidia. They have sad, incomplete software efforts divorced from what devs are actually doing, like their quantization framework or some inexplicably bad LLMs they trained themselves. I think Strix Halo is the only GPU compute thing they did half right recently, and they still screwed that up.
They give no one any reason to give them a chance, and wonder why no one comes. Lisa Su could fix this with literally like three phone calls (remove VRAM restrictions on their OEMs, cut pro card prices, fix stupid small bugs in ROCm), but she doesn't. It's inexplicable.
-
How to create a successful GPU company in 2025:
- Step 1: build a time machine and go back 30 years
What do I do before I did that?
-
What do I do before I did that?
Have rich parents, pale skin, etc.
-
Have rich parents, pale skin, etc.
No, that comes afterwards.
-
AMD has basically gone with the “build it and they will come” attitude
Except they didn't.
They repeatedly fumble the software with little mistakes (looking at you, Flash Attention). They price the MI300X, W7900, and any high-VRAM GPU through the roof, when they have every reason to be more competitive and undercut Nvidia. They have sad, incomplete software efforts divorced from what devs are actually doing, like their quantization framework or some inexplicably bad LLMs they trained themselves. I think Strix Halo is the only GPU compute thing they did half right recently, and they still screwed that up.
They give no one any reason to give them a chance, and wonder why no one comes. Lisa Su could fix this with literally like three phone calls (remove VRAM restrictions on their OEMs, cut pro card prices, fix stupid small bugs in ROCm), but she doesn't. It's inexplicable.
That's basically what I said in so many words.
AMD is doing its own thing, if you want what Nvidia offers you're gonna have to build it yourself.
WRT pricing, I'm pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side, from what I've read, but companies that have made that leap have been unhappy because AMD's enterprise GPU offerings were so unreliable. The biggest culprit, from what I can gather, is that AMD's GPU firmware/software side is basically still ATI camped out in Markham, divorced from the rest of the company in Austin that is doing great work on the CPU side.
-
People like youz starts snoopin' around askin' questions tends to fall outta windows, y'know what I'm sayin'?
-
That's basically what I said in so many words.
AMD is doing its own thing, if you want what Nvidia offers you're gonna have to build it yourself.
WRT pricing, I'm pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side, from what I've read, but companies that have made that leap have been unhappy because AMD's enterprise GPU offerings were so unreliable. The biggest culprit, from what I can gather, is that AMD's GPU firmware/software side is basically still ATI camped out in Markham, divorced from the rest of the company in Austin that is doing great work on the CPU side.
WRT pricing, I’m pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side
I'm not as sure about this, but it seems like AMD is taking a fat margin on the MI300X (and its successor?) and kinda ignoring the performance penalty. It's easy to say "build it yourself!" but the reality is very few can, or will, do this; most will simply try to deploy vLLM or vanilla TRL or something as best they can (and run into the same issues everyone does).
The 'enthusiast' side where all the university students and tinkerer devs reside is totally screwed up though. AMD is mirroring Nvidia's VRAM cartel pricing when they have absolutely no reason to. It's completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price matching an A6000).
The biggest culprit, from what I can gather, is that AMD’s GPU firmware/software side is basically still ATI camped out in Markham, divorced from the rest of the company in Austin that is doing great work on the CPU side.
Yeah, it does seem divorced from the CPU division. But a lot of the badness comes from business decisions, even when the silicon is quite good, and some of that must be from Austin.
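For reference, the "just deploy vLLM" path mentioned above really is only a few lines, which is why it's the default. A minimal sketch (the model name is just a placeholder, and on AMD hardware this assumes a ROCm build of vLLM is installed):

    from vllm import LLM, SamplingParams

    # Minimal offline-inference sketch with vLLM. The model below is a
    # placeholder; swap in whatever you actually serve.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, max_tokens=64)

    outputs = llm.generate(["Why doesn't Nvidia have more competition?"], params)
    for out in outputs:
        print(out.outputs[0].text)

The catch isn't these lines; it's whether the kernels underneath (attention, quantization, etc.) actually work well on the non-Nvidia backend.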
-
At first I was going to say there is ATI.
Then I realized I hadn't heard about ATI in a while and looked up what happened to it.
Then I realized... I'm old.
-
WRT pricing, I’m pretty sure AMD is typically a fraction of the price of Nvidia hardware on the enterprise side
I'm not as sure about this, but it seems like AMD is taking a fat margin on the MI300X (and its successor?) and kinda ignoring the performance penalty. It's easy to say "build it yourself!" but the reality is very few can, or will, do this; most will simply try to deploy vLLM or vanilla TRL or something as best they can (and run into the same issues everyone does).
The 'enthusiast' side where all the university students and tinkerer devs reside is totally screwed up though. AMD is mirroring Nvidia's VRAM cartel pricing when they have absolutely no reason to. It's completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price matching an A6000).
The biggest culprit, from what I can gather, is that AMD’s GPU firmware/software side is basically still ATI camped out in Markham, divorced from the rest of the company in Austin that is doing great work on the CPU side.
Yeah, it does seem divorced from the CPU division. But a lot of the badness comes from business decisions, even when the silicon is quite good, and some of that must be from Austin.
The ‘enthusiast’ side where all the university students and tinkerer devs reside is totally screwed up though. AMD is mirroring Nvidia’s VRAM cartel pricing when they have absolutely no reason to. It’s completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price matching an A6000).
Eh, the biggest issue here is that most (post-secondary) students probably just have a laptop for whatever small GPGPU learning they're doing, and laptop GPUs are overwhelmingly Nvidia. Grad students will have access to their institution's resources, which are also dominated by Nvidia (this has been a concerted effort).
Only the few that explicitly pursue AMD hardware will end up with it, and that requires significant foundational work. So the easiest path for research is to throw students at CUDA and Nvidia hardware.
Basically, Nvidia has entrenched itself in the research/educational space, and that space is slow-moving (Java is still the de facto CS standard, with only slow movements to Python happening at some universities), so I don't see much changing unless AMD decides it's very hungry and wants to chase the market.
Lower VRAM prices could help, but the truth is people and institutions are willing to pay more (obviously) for plug and play.