The Chinese models are locally hostable. This does not, and cannot, count entities self-hosting the models privately.
The research posted by American AI companies (other than huggingface and a few startups) is pretty much a nothing burger.
This is what I keep trying to tell everyone. It’s not US vs China, nor AI vs no AI; the real battle is corporate APIs vs augmented, locally hosted, open-weights and open-research models.
I hope the future is specialized models on smartphones, occasionally augmented by remote APIs. And that setup has a lot of gravity because, once it’s in place, the local calls are basically free.
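A minimal sketch of that local-first pattern, assuming a small on-device model with a metered remote API only as fallback (the endpoint, function names, and runtime here are hypothetical placeholders, not any particular product):

    # Hypothetical sketch of a local-first setup: answer on-device when possible,
    # fall back to a metered remote API only when the small model can't handle it.
    import requests  # assumption: the remote fallback speaks plain HTTP/JSON

    def local_generate(prompt: str) -> str | None:
        """Run the small on-device model; return None when it isn't confident."""
        # Placeholder: swap in an on-device runtime (llama.cpp bindings, MLC, Core ML, ...).
        # Once the weights are on the phone, each local call is essentially free.
        return None

    def remote_generate(prompt: str) -> str:
        """Call the remote corporate API: the metered, per-token-cost path."""
        resp = requests.post(
            "https://api.example.com/v1/generate",  # hypothetical endpoint
            json={"prompt": prompt},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["text"]

    def answer(prompt: str) -> str:
        # Local first; remote only as occasional augmentation.
        local = local_generate(prompt)
        return local if local is not None else remote_generate(prompt)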
And AMD/Nvidia are still relevant in that future because the models will likely still be trained on their hardware, at least.