AI is outperforming humans in both IQ and creativity in 2021

Well, one big difference there is that domesticated animals can't write or interpret laws, or write dictionaries. AIs can and do. There is also an argument that intelligent animals can be "slaves", but we are getting into the realms of philosophy and current affairs there, and it isn't relevant to this discussion anyway. My point was a response to the question of "where would it get income", and that an AI can easily earn money. The question is whether or not it can keep it. If it is no more than an "engine", then it currently cannot.
 
@Cthulhu.Science Your last comment also makes me think of "Crocodile" and "The Entire History of You" from Black Mirror. Not to be paranoid, but reality is starting to sound like that as well.
 

In a poll asking when people think superintelligent artificial general intelligence (AGI) will arrive, 60% of voters said they think it will come within the next ten years.
Less than 20% think it will take more than 20 years.
This poll, however, could have been taken anytime in the last fifty years and reached the same result. True AI has always been just ten years away. In ten years time, it will still be ten years away.
 
This poll, however, could have been taken anytime in the last fifty years and reached the same result. True AI has always been just ten years away. In ten years time, it will still be ten years away.
No, it hasn't. We have a rough estimate of the computing power of the brain: 100 to 1,000 teraflops.
Even accounting for redundancy, we need at least 20 teraflops of computing power to get AI.
The same goes for memory: we need at least 128 GB of RAM to get any decent AGI.
Ten, twenty, or thirty years ago we didn't have the hardware.
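Here is a minimal sketch of that threshold argument; the two example machine specs below are invented placeholders for illustration, not real hardware figures:

```python
# Thresholds taken from the post above; the example machines are invented.
MIN_AI_FLOPS = 20e12     # at least 20 teraflops, even allowing for redundancy
MIN_AI_RAM_GB = 128      # at least 128 GB of RAM for a "decent" AGI

def clears_hardware_bar(flops: float, ram_gb: float) -> bool:
    """True if a machine meets both minimums claimed above."""
    return flops >= MIN_AI_FLOPS and ram_gb >= MIN_AI_RAM_GB

print(clears_hardware_bar(flops=100e6, ram_gb=0.064))  # 1990s-class box: False
print(clears_hardware_bar(flops=300e12, ram_gb=512))   # hypothetical modern GPU server: True
```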
 
'"In 1954 a Georgetown-IBM team predicted that language translation programs would be perfected in three to five years. In 1965 Herbert Simon said that “machines will be capable, within twenty years, of doing any work a man can do.” In 1970 Marvin Minsky told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.”'

'In 2014, Ray Kurzweil predicted that by 2029, computers will have human-level intelligence and will have all of the intellectual and emotional capabilities of humans, including “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy.” As we move closer to 2029, Kurzweil talks more about 2045.'

'In a 2009 TED talk, Israeli neuroscientist Henry Markram said that within a decade his research group would reverse-engineer the human brain by using a supercomputer to simulate the brain’s 86 billion neurons and 100 trillion synapses.'

Why Ambitious Predictions About A.I. Are Always Wrong
 
'"In 1954 a Georgetown-IBM team predicted that language translation programs would be perfected in three to five years. In 1965 Herbert Simon said that “machines will be capable, within twenty years, of doing any work a man can do.” In 1970 Marvin Minsky told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.”'

'In 2014, Ray Kurzweil predicted that by 2029, computers will have human-level intelligence and will have all of the intellectual and emotional capabilities of humans, including “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy.” As we move closer to 2029, Kurzweil talks more about 2045.'

'In a 2009 TED talk, Israeli neuroscientist Henry Markram said that within a decade his research group would reverse-engineer the human brain by using a supercomputer to simulate the brain’s 86 billion neurons and 100 trillion synapses.'

Why Ambitious Predictions About A.I. Are Always Wrong
Exactly. So much hype and bulls**t. And it's not just in the field of AI. All areas of science and engineering are blighted by this. An idiot like Elon Musk can say we are going to have cities on Mars in a few short years and folks actually take him seriously.
 
'"In 1954 a Georgetown-IBM team predicted that language translation programs would be perfected in three to five years. In 1965 Herbert Simon said that “machines will be capable, within twenty years, of doing any work a man can do.” In 1970 Marvin Minsky told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.”'

'In 2014, Ray Kurzweil predicted that by 2029, computers will have human-level intelligence and will have all of the intellectual and emotional capabilities of humans, including “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy.” As we move closer to 2029, Kurzweil talks more about 2045.'

'In a 2009 TED talk, Israeli neuroscientist Henry Markram said that within a decade his research group would reverse-engineer the human brain by using a supercomputer to simulate the brain’s 86 billion neurons and 100 trillion synapses.'

Why Ambitious Predictions About A.I. Are Always Wrong
Well, back in 1989 I made a rough estimate of the brain's processing power. So, assuming computing power (and memory) doubled every 2 years, I expected AGI no sooner than 2023 (I have to add that memory growth actually stalled and grew at a slower pace starting about 8 years ago). There could be efficiency improvements: crows are extremely intelligent creatures for such a small brain; but those improvements are hard to find, which takes research time, pushing the date further out.
Every prediction that stated AGI was achievable before 2023 is wrong, because AGI requires a lot of memory, computing power, and researchers (hence the need for massive amounts of cheap, generally available computing power). Now we have all the hardware and thousands of researchers, so the prediction is feasible; a sketch of the extrapolation follows.
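As a minimal illustration of that back-of-the-envelope extrapolation (the 1989 baseline figure below is an assumed placeholder, not a measured value, and the result is very sensitive to it):

```python
import math

# Sketch of the doubling extrapolation above. The 1989 baseline is an
# illustrative assumption; the printed year moves a lot if you change it.
BRAIN_FLOPS = 100e12        # low end of the 100-1,000 teraflops estimate
BASELINE_YEAR = 1989
BASELINE_FLOPS = 10e6       # assumed ~10 MFLOPS for late-1980s hardware
DOUBLING_PERIOD_YEARS = 2   # the doubling time assumed in the post

doublings = math.log2(BRAIN_FLOPS / BASELINE_FLOPS)
parity_year = BASELINE_YEAR + DOUBLING_PERIOD_YEARS * doublings
print(f"{doublings:.1f} doublings -> brain-scale compute around {parity_year:.0f}")
```

With these placeholder numbers the crossover lands in the mid-2030s; a higher assumed baseline or a shorter doubling period pulls it back toward the early 2020s.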
From the functional perspective we already have:
- Object and character recognition.
- Speech recognition.
- Spatial navigation.
- Language semantics.
- Image generation.
- Voice synthesis.
- General planning and strategy.
- Learning.
The missing pieces are first-order logic and symbolic manipulation. Further down the road, in order to have artificial consciousness we need to advance a lot in neural network explainability, which would give machines the capacity to analyze and interpret their own mind-state. So ten to fifteen years seems a reasonable timeline now (2033-2038).
 
Exactly. So much hype and bulls**t. And it's not just in the field of AI. All areas of science and engineering are blighted by this. An idiot like Elon Musk can say we are going to have cities on Mars in a few short years and folks actually take him seriously.

I don't know how many scientists make unfounded claims to the public. I remember listening, quite a few years ago, to a radio broadcast (Quirks and Quarks, a great science show) about how more and more scientists are reluctant to talk to journalists regarding their research, since their work gets misrepresented (sensationalized) to the public.
 
The missing pieces are first-order logic and symbolic manipulation. Further down the road, in order to have artificial consciousness we need to advance a lot in neural network explainability, which would give machines the capacity to analyze and interpret their own mind-state. So ten to fifteen years seems a reasonable timeline now (2033-2038).
You're making the same mistake everyone else makes by assuming consciousness has something to do with processing power. It doesn't.
 
There is a great deal of difference (as any RPG player would tell you!) between intelligence and wisdom. Artificial intelligence is already super intelligent. It knows far more, and has instant access to it. But there is a great deal of difference between knowing a recipe and being able to create the perfect meal. Between knowing what love is and knowing how it feels to love. Between knowing about literature and being able to write a story that will resonate with the reader.
 
You're making the same mistake everyone else makes by assuming consciousness has something to do with processing power. It doesn't.


This is correct. We consider ourselves to be conscious beings, but what that actually means, and how it comes about, is less obvious. So how can we know how to go about instilling it in other things?
 
I don't know how many scientists make unfounded claims to the public. I remember listening, quite a few years ago, to a radio broadcast (Quirks and Quarks, a great science show) about how more and more scientists are reluctant to talk to journalists regarding their research, since their work gets misrepresented (sensationalized) to the public.


They've got jobs to do and families to feed. Saying anything outside what is accepted fact jeopardises both. Questioning known 'facts', which is part of many scientists' day-to-day job, can in the wrong hands be turned into sensationalism. That's great if you want to get your name in the news or promote a book, but it's not what most people want to do when they are just trying to earn a living.
 
They've got jobs to do and families to feed. Saying anything outside what is accepted fact jeopardises both. Questioning known 'facts', which is part of many scientists' day-to-day job, can in the wrong hands be turned into sensationalism. That's great if you want to get your name in the news or promote a book, but it's not what most people want to do when they are just trying to earn a living.
But often their funding, or their investment in the companies they represent, depends upon them making a splash in the media.
 
But often their funding, or their investment in the companies they represent, depends upon them making a splash in the media.


True, and I think the same applies to agencies such as NASA. Create public awareness and interest, and you'll also attract the support of those who fund you, especially if it helps increase their popularity.
 
True, and I think the same applies to agencies such as NASA. Create public awareness and interest, and you'll also attract the support of those who fund you, especially if it helps increase their popularity.
But at the same time they know that if they overhype/overpromise, it will end up working against them. I'm sure most of the overpromising attributed to scientists is actually media overpromising based on a probably much more cautious announcement from the scientists. Most of the time, that is.
 
You're making the same mistake everyone else makes by assuming consciousness has something to do with processing power. It doesn't.
It depends on your definition of consciousness. If we use the definition on Wiki (which I think is a good one), I'd argue that 'processing power' (of the brain) plays a role in our conscious experience.

From Wiki: "Consciousness, at its simplest, is sentience and awareness of internal and external existence"

'Awareness of internal existence' depends on how the brain processes data. Our thoughts (awareness of internal existence) are dependent upon how smart/intellectual we are, which is directly related to processing power. To quote a well-known saying: “Strong minds discuss ideas, average minds discuss events, weak minds discuss people.” I'd assume that most, if not all, of those who frequent these types of forums belong in the first category. A 'strong mind' is one that has more processing power than a 'weak mind', and it's the processing power that determines how deeply we can understand a subject, which is no doubt related to our interest in it (intellect). It's also why other organisms, such as dolphins, can only reach the intelligence of an average 5-year-old: they don't have the 'processing power' to go beyond that level of intelligence.

Obviously, processing power isn't the totality of one's conscious experience; however, it's definitely related to it.
 
