Bubble Trouble
-
This article describes what I've been thinking about for the last week: how will these billions in investment by big tech actually create something significantly better than what we already have today?
There are major issues ahead, and I'm not sure they can be solved. Read the article.
-
My company is in AI. One of our customers pays for systems capable of the heavy computational work needed to design drugs to treat Parkinson's. That work is only newly possible with the latest technology.
-
Interesting: https://arxiv.org/pdf/2305.17493
The referenced paper, "The Curse of Recursion: Training on Generated Data Makes Models Forget", is a great read.
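To see the mechanism, here's a minimal toy sketch (mine, not from the paper): fit a Gaussian to some data, sample the next generation's "training data" from that fit, refit, and repeat. With finite samples the estimation error compounds, the fitted variance tends to shrink, and the tails of the original distribution get forgotten. That's the collapse the paper demonstrates for far richer models.

  import random
  import statistics

  random.seed(0)

  N = 50            # samples per generation (deliberately small)
  GENERATIONS = 300

  # Generation 0 trains on "real" data: a standard normal distribution.
  data = [random.gauss(0.0, 1.0) for _ in range(N)]

  for gen in range(GENERATIONS + 1):
      mu = statistics.fmean(data)
      sigma = statistics.stdev(data)
      if gen % 50 == 0:
          print(f"generation {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
      # Each generation sees only data sampled from the previous fit.
      # Finite-sample error compounds, so on average sigma shrinks and
      # rare tail values are progressively forgotten.
      data = [random.gauss(mu, sigma) for _ in range(N)]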
-
I wonder if AI applications other than "be a generalist chatbot" would run into the same thing. I'm thinking of pharma, weather prediction, etc. They would still have to "understand" their English-language prompts, but LLMs can do that just fine today, and they could feed systems designed to iteratively solve problems in those areas. A model feeding into itself or into other models doesn't have to be a bad thing.
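To sketch what I mean (every name here is made up for illustration): a generator proposes candidates around its own previous output, so model output really does feed back into the model, but each candidate is scored against an external ground truth, standing in for lab data, a simulator, or real observations.

  import random

  random.seed(1)

  def objective(x: float) -> float:
      # Stand-in for external ground truth (an assay, a simulator, real
      # weather observations); it is not produced by the model itself.
      return -(x - 3.0) ** 2  # single peak at x = 3.0

  best_x = 0.0
  best_score = objective(best_x)

  for step in range(200):
      # The "generator" proposes around its own previous output, i.e.
      # model output feeding back into the model.
      candidate = best_x + random.gauss(0.0, 0.5)
      score = objective(candidate)
      # The external check is what prevents collapse: only candidates
      # verified against ground truth are kept.
      if score > best_score:
          best_x, best_score = candidate, score

  print(f"converged near x = {best_x:.3f} (true optimum is 3.000)")

The recursion in the paper above has no such anchor: each generation trains on the previous generation's unverified output, which is where the forgetting comes from.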
-
Only in the sense that those “words” they know are pointers to likely connected words. If the concepts line up the same way, then in theory it all works. But beyond FAQs and the like, I’m not seeing anything that indicates they’re ready for anything more.