
Why doesn't Nvidia have more competition?

Technology
  • The ‘enthusiast’ side where all the university students and tinkerer devs reside is totally screwed up though. AMD is mirroring Nvidia’s VRAM cartel pricing when they have absolutely no reason to. It’s completely bonkers. AMD would be in a totally different place right now if they had sold 40GB/48GB 7900s for an extra $200 (instead of price matching an A6000).

    Eh, the biggest issue here is that most (post-secondary) students probably just have a laptop for whatever small GPGPU learning they're doing, and discrete laptop GPUs are overwhelmingly Nvidia. Grad students will have access to institutional resources, which are also dominated by Nvidia (this has been a concerted effort).

    Only the few who explicitly pursue AMD hardware will end up with it, and that requires significant foundational work up front. So the easiest path for research is to throw students at CUDA and Nvidia hardware.

    Basically, Nvidia has entrenched itself in the research/educational space, and that space is slow moving (Java is still the de facto CS teaching standard, with only slow movement toward Python at some universities), so I don't see much changing unless AMD decides it's very hungry and wants to chase the market.

    Lower VRAM prices could help, but the truth is people and institutions are willing to pay more (obviously) for plug and play.

    I dunno. From my more isolated perspective on GitHub and small LLM testing circles, I see a lot of 3090s, 4090s, sometimes arrays of 3060s/3090s or old P40s or MI50s, which people got basically for the purpose of experimentation and development because they can't drop (or at least justify) $5K.

    They would 100% drop that money on at least one 7900 48GB instead (as the sheer capacity is worth it over the speed hit and finickiness), and then do a whole bunch of bugfixing/testing on them. I know I would. Hence the Framework Strix Halo thing is sold out even though it's... rather compute-lite compared to a 3090+ GPU.

    It seems like a tiny market, but a lot of the frameworks/features/models being developed by humble open-source devs filter up to the enterprise space. You'd absolutely see more enterprise use once the toolkits were hammered out on desktops... but they aren't, because AMD gives us no incentive to do so. A 7900 is just not worth the trouble over a 3090/4090 if its VRAM capacity is the same, and this (more or less) extends up and down the price range. (Rough VRAM math is sketched below.)
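
    A hedged aside on that VRAM point: the back-of-the-envelope check hobbyists actually do is just "do the quantized weights fit in the card's memory?". The sketch below illustrates it in CUDA; cudaMemGetInfo is the real runtime call, while the model sizes and bytes-per-parameter figures are illustrative assumptions, not benchmarks.

    ```cuda
    // Rough "does it fit" check: compares a card's VRAM against approximate
    // quantized-weight sizes. Model entries are illustrative assumptions;
    // real usage also needs headroom for KV cache and activations.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        size_t free_b = 0, total_b = 0;
        if (cudaMemGetInfo(&free_b, &total_b) != cudaSuccess) {
            std::printf("No CUDA device visible.\n");
            return 1;
        }
        const double GiB = 1024.0 * 1024.0 * 1024.0;
        double total_gib = total_b / GiB;

        // weights ~= parameter count * bytes per parameter
        struct Model { const char* name; double params; double bytes_per_param; };
        const Model models[] = {
            {"70B @ 4-bit", 70e9, 0.5},  // ~33 GiB: wants a 48 GB card
            {"34B @ 4-bit", 34e9, 0.5},  // ~16 GiB: fits a 24 GB 3090 / 7900 XTX
            {"13B @ 8-bit", 13e9, 1.0},  // ~12 GiB
        };

        std::printf("VRAM: %.1f GiB total\n", total_gib);
        for (const Model& m : models) {
            double need_gib = m.params * m.bytes_per_param / GiB;
            std::printf("%-12s ~%5.1f GiB -> %s\n", m.name, need_gib,
                        need_gib < total_gib ? "fits" : "does not fit");
        }
        return 0;
    }
    ```

    The arithmetic is the whole argument: 4-bit weights for a 70B-class model land around 33 GiB, which is exactly the gap between a 24 GB card and the hypothetical 48 GB 7900 the commenter wants.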

  • It's funny how the article asks the question, but completely fails to answer it.

    About 15 years ago, Nvidia discovered there was demand for datacenter compute that could be met with powerful GPUs. They were quick to respond to it, and they had the resources to focus on it hard, thanks to their huge success and high profitability in the GPU market.

    AMD also saw the market and wanted to pursue it, but just over a decade ago, when the potential for profit was becoming clear, AMD was near bankruptcy and hard pressed to finance development of GPUs and datacenter compute. AMD tried as hard as it could and was moderately successful from a technology perspective, but Nvidia already had a head start, and its proprietary development platform, CUDA, was already an established standard that was very hard to break into.

    Intel simply fumbled the ball from start to finish. It spent a decade trying to knock ARM off the mobile crown, investing billions (roughly the equivalent of ARM's entire revenue), and never managed to catch up despite having the better production process at the time. That was Intel's main focus, and Intel believed GPUs would never be more than a niche product.
    So when Intel did try to compete on datacenter compute, it tried to do it with x86. One of its boldest efforts was a many-core monstrosity built from simple x86 cores (the Xeon Phi line), which of course performed laughably badly compared to Nvidia. Because, as it turns out, the way forward, at least for now, is the massively parallel compute capability of a GPU (see the sketch at the end of this comment), which Nvidia has refined for decades, with only (inferior) competition from AMD.

    But despite the lack of competition, Nvidia did not slow down. In fact, with increased profits it only grew bolder in its efforts, making it even harder to catch up.

    AMD has now had more money to compete with for a while, and they do have some decent compute hardware, but Nvidia remains ahead and the CUDA problem is still there. For AMD to really compete with Nvidia, they have to be better in order to attract customers, and that's a very tall order against a company that seemingly never stops progressing. So the only other option for AMD is to sell a bit cheaper, which I suppose they have to.

    AMD and Intel were the obvious competitors; everybody else is coming from even further behind.
    But if I had to make a bet, it would be on Huawei. Huawei has some crazy good developers, and Trump is basically forcing them to figure it out themselves by blocking Huawei, and China in general, from using both AMD and Nvidia AI chips. The chips will probably be made by China's SMIC, because China is also cut off from advanced foundries outside its borders, most notably TSMC.
    China will prevail, because this has become a national project of both prestige and necessity, and China has a massive talent pool and massive resources, so nothing can stop it now.

    IMO the USA would clearly have been better off allowing China to use American chips. Now China will soon compete directly on both production and design too.
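
    To make the "massively parallel" and CUDA lock-in points above concrete, here is a minimal sketch of the canonical SAXPY example in CUDA. The kernel and runtime calls (cudaMalloc, cudaMemcpy, the <<<...>>> launch) are standard CUDA; the remark about AMD's HIP mirroring this API is the usual porting story, added here for context rather than claimed in the comment.

    ```cuda
    // Minimal SAXPY: one lightweight thread per element instead of one loop on
    // a CPU core. The host-side calls (cudaMalloc / cudaMemcpy / <<<...>>>)
    // are the API surface every framework is written against -- the "CUDA
    // problem" above. AMD's HIP mirrors these calls almost one-for-one
    // (hipMalloc, hipMemcpy, ...), which is the porting path, but CUDA remains
    // the reference everyone writes first.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) y[i] = a * x[i] + y[i];              // one element per thread
    }

    int main() {
        const int n = 1 << 20;  // ~1M elements
        std::vector<float> x(n, 1.0f), y(n, 2.0f);

        float *dx = nullptr, *dy = nullptr;
        cudaMalloc(&dx, n * sizeof(float));
        cudaMalloc(&dy, n * sizeof(float));
        cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        // Launch ~1M threads in blocks of 256; the GPU schedules them across
        // thousands of cores, where a CPU would walk the array serially or
        // across a few dozen cores at best.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
        cudaDeviceSynchronize();

        cudaMemcpy(y.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
        std::printf("y[0] = %.1f (expected 4.0)\n", y[0]);

        cudaFree(dx);
        cudaFree(dy);
        return 0;
    }
    ```

    The launch line is the "massively parallel" part: the same loop body runs as roughly a million threads. The handful of host API names around it are what the CUDA moat is actually made of.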