This man doesn’t even know the difference between AGI and a text generation program, so it doesn’t surprise me he couldn’t tell the difference between that program and real, living human beings.
He also seems to have deleted his LinkedIn account.
AGI is currently just a buzzword anyway…
Microsoft defines AGI in its contracts in terms of dollars of earnings…
If you traveled back in time five years and showed the current best GPT to someone, they would probably accept it as AGI.
I’ve seen multiple experts on German television explaining that LLMs will reach AGI within a few years…
(That does not mean that the CEO guy isn’t a fool. Let’s wait for the first larger problem that requires not writing new code, but rather dealing with a bug, something undocumented, or similar…)
LLMs can’t become AGIs. They have no ability to actually reason. What they can do is use predigested reasoning to fake it. It’s particularly obvious with certain classes of problems, where they fall down. I think the fact it fakes so well tells us more about human intelligence than about AI.
That being said, LLMs will likely be a critical part of a future AGI. Right now, they are a lobotomised speech centre. Different groups are already starting to tie them to other forms of AI. If we can crack building a reasoning engine, then a full AGI is possible. An LLM might even form its internal communication method, akin to our internal monologue.
While I haven’t read the paper, the comment’s explanation seems to make sense. It supposedly contains a mathematical proof that making AGI from a finite dataset is an NP-hard problem. I still have to read it and parse out the reasoning; if true, it would make for a great argument in cases like these.
https://lemmy.world/comment/14174326
If that is true, how does the brain work?
Call everything you have ever experienced the finite dataset.
Constructing your brain from DNA works in a timely manner.
Then training it does too: you get visibly smarter over time, roughly on a linear scale.
I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.
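To make the frozen-weights point concrete, here is a minimal sketch (illustrative only; the toy model, shapes, and learning rate are placeholders, not any real lab's setup): a deployed LLM runs inference with fixed parameters, whereas a system that "never stops learning" would keep folding new observations back into its weights.

```python
import torch

# Toy stand-in for a language model; a real one would be a transformer.
model = torch.nn.Linear(16, 16)

# Deployment today: weights are frozen, the model only does inference.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16))

# A system that keeps learning would instead fold each new observation back in:
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
model.train()
for new_example, target in [(torch.randn(1, 16), torch.randn(1, 16))]:
    loss = torch.nn.functional.mse_loss(model(new_example), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # weights change after deployment, like ongoing human learning
```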
Part of why I think AGI is so far away is that running the training in real time like a human would take more compute than currently exists. They should be focusing on doing more with less compute, finding new, more efficient algorithms and architectures, not throwing more and more GPUs at the problem. Right now 10x the GPUs gets you maybe 5-10% better accuracy on whatever benchmark, which is not a sustainable direction.
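As a rough illustration of those diminishing returns, here is a toy calculation assuming benchmark error follows a power law in compute (the constants below are made up for the example; real scaling-law exponents vary by task and model family):

```python
# Hypothetical power-law scaling: error ~ k * compute^(-alpha).
# k and alpha are invented for illustration, not measured values.
def accuracy(compute, k=0.5, alpha=0.07):
    """Benchmark accuracy under an assumed power-law error curve."""
    return 1.0 - k * compute ** (-alpha)

for mult in (1, 10, 100):
    print(f"{mult:>4}x compute -> accuracy {accuracy(float(mult)):.3f}")

# Output with these made-up constants:
#    1x compute -> accuracy 0.500
#   10x compute -> accuracy 0.574
#  100x compute -> accuracy 0.638
# i.e. each 10x in compute buys only a few percentage points.
```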