Bubble Trouble
-
This article describes what I've been thinking about for the last week: how will these billions in investments by big tech actually create something significantly better than what we already have today?
There are major issues ahead, and I'm not sure they can be solved. Read the article.
Bubble Trouble
As I previously warned, artificial intelligence companies are running out of data. A Wall Street Journal piece from this week has sounded the alarm that some believe AI models will run out of "high-quality text-based data" within the next two years in what an AI researcher called "a frontier research
Ed Zitron's Where's Your Ed At (www.wheresyoured.at)
-
My company works in AI. One of our customers pays for systems capable of the hard computational work needed to design drugs to treat Parkinson's. This is only newly possible with the latest technology.
-
Interesting: https://arxiv.org/pdf/2305.17493
The referenced paper, "The Curse of Recursion: Training on Generated Data Makes Models Forget", is a great read.
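The failure mode the paper describes is easy to reproduce in a toy setting: keep refitting a simple model on data sampled from the previous generation's fit, and the estimated spread drifts toward zero, so the tails of the original distribution get forgotten. A minimal sketch of that idea (my own illustration, loosely following the paper's single-Gaussian discussion, not its code):

```python
import numpy as np

# Toy illustration of recursive training on generated data: generation 0 is
# the "real" distribution; every later generation is a Gaussian refitted to a
# finite sample drawn from the previous generation's model. Estimation error
# compounds, and the fitted sigma drifts toward zero -- the tails are forgotten.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0      # the original data distribution
n_per_generation = 25     # small sample size makes the collapse visible quickly
for generation in range(1, 201):
    synthetic = rng.normal(mu, sigma, n_per_generation)  # generated data only
    mu, sigma = synthetic.mean(), synthetic.std()        # refit on it
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```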
-
I wonder if AI applications other than "be a generalist chatbot" would run into the same thing. I'm thinking about pharma, weather prediction, etc. They would still have to "understand" their English-language prompts, but LLMs can do that just fine today, and could feed systems designed to iteratively solve problems in those areas. A model feeding into itself or other models doesn't have to be a bad thing.
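For what it's worth, that split is cheap to wire up: the language model only has to turn the English request into structured parameters, and a conventional, deterministic domain system does the actual work, with no generated text fed back into it. A rough sketch of the shape of it; `call_llm` and `run_forecast` are hypothetical placeholders, not real APIs:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a model API call; here it just returns a
    # canned response so the sketch runs end to end.
    return '{"region": "Rhine valley", "hours_ahead": 48}'

def parse_request(user_request: str) -> dict:
    # The LLM's only job: map free-form English onto a fixed schema.
    prompt = (
        "Extract the forecast parameters from the request below and answer "
        'with JSON shaped like {"region": str, "hours_ahead": int}.\n\n'
        f"Request: {user_request}"
    )
    return json.loads(call_llm(prompt))

def run_forecast(region: str, hours_ahead: int) -> list[float]:
    # Placeholder for the domain system (numerical weather model, docking
    # pipeline, ...): deterministic code, nothing generated is fed back in.
    return [0.0] * hours_ahead

if __name__ == "__main__":
    params = parse_request("Give me a 48-hour forecast for the Rhine valley")
    print(len(run_forecast(params["region"], params["hours_ahead"])))
```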
-
Only in the sense that the “words” they know are pointers to other words that are likely connected. If the concepts line up the same way, then theoretically it's all good. But beyond FAQs and the like, I'm not seeing anything that would indicate it's ready for much more.
-
My company works in AI. One of our customers pays for systems capable of the hard computational work needed to design drugs to treat Parkinson's. This is only newly possible with the latest technology.
This is actually one of the most promising applications: AI can screen millions of potential drug compounds and predict protein interactions in hours instead of months, which is why we're seeing breakthroughs in neurodegenerative disease research.
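To make "screening" concrete, one common setup is ligand-based virtual screening: train a classifier on compounds with known activity, then rank a large candidate library by predicted activity. A rough sketch using RDKit fingerprints and scikit-learn; the SMILES strings and labels below are made-up placeholders, and real pipelines add docking, ADMET filtering, and vastly more data:

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles: str) -> np.ndarray:
    # Morgan (circular) fingerprint as a fixed-length bit vector.
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Placeholder training set: molecules with known activity labels (1 = active).
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
train_labels = [0, 1, 1, 0]

# Placeholder candidate library; in practice this is millions of compounds.
library_smiles = ["c1ccccc1C(=O)O", "CCCCO", "c1ccc2ccccc2c1"]

X_train = np.stack([fingerprint(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, train_labels)

X_lib = np.stack([fingerprint(s) for s in library_smiles])
scores = model.predict_proba(X_lib)[:, 1]   # predicted probability of activity

# Rank the library by predicted activity, highest first.
for smiles, score in sorted(zip(library_smiles, scores), key=lambda t: -t[1]):
    print(f"{score:.2f}  {smiles}")
```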
-
That's probably Machine Learning, the broader category of tools LLMs grew out of, not the large language models themselves that we now call 'AI'. Machine learning has many applications it is genuinely efficient at, explored gradually since the 1980s I believe, while the AI boom involving Google, Meta, OpenAI and others is about generalist chatbots that are bad at just about everything they're used for. I'm making that distinction not because I'm an ass, but because I don't want the hype wave to gain credibility on the back of real scientific and technological progress.
-
It depends: if they use a transformer- or diffusion-based architecture, I think it would be fair to include it in the same "AI wave" that's been breaking since ChatGPT was released publicly.