Is this Google AI Engineer Crazy or Not?

The ScienceDaily article says: "We used to think of each neuron as a sort of whistle, which either toots, or doesn't," Prof. Schiller explains. "Instead, we are looking at a piano. Its keys can be struck simultaneously, or in sequence, producing an infinity of different tunes."

This certainly sounds like how quantum computers are supposed to work, which would explain how the brain packs so much computing power into such a tiny size with such minimal power requirements.

This makes it difficult to compare a program run on quantum hardware with one run on digital logic.

Looking at it metaphorically, a quantum computer uses calculus to elegantly compute results while a digital machine uses longhand algebra and geometry to crudely perform similar computations. If a digital machine is big enough, it could mimic a quantum machine's results, but mimicking is not always the same thing as the original.
 
So, the consensus seems to be:
i) we can tell for certain that this (obviously) isn't a sentient AI; but,
ii) we probably wouldn't be able to recognize it, even if it was?

It's true. It would be hard for an AI to achieve that sort of illogic. ;-)
 
I beg to differ about how well we understand consciousness.
With what post are you differing?

I don't know if it's entirely clear what he's saying. I interpret it to mean that, to him, the responses the programme gives suggest a mind is at work that understands itself as a separate, cohesive entity that is using creativity to construct novel sentences rather than collaging a statistically likely response to an input. It has some object permanence, so it can remember past topics and claims emotion.
Would that not be a form of consciousness, if it is aware of itself as a cohesive entity?
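To make the "collaging a statistically likely response" idea concrete, here is a minimal toy sketch (not how LaMDA or any real large model actually works, and the corpus and function names are invented for illustration): a bigram model that simply counts which word follows which in its training text, then emits the most frequent continuation word by word. No understanding is involved, only frequency lookup.

```python
from collections import Counter, defaultdict

# Tiny invented training corpus; a real model is trained on vastly more text.
corpus = "i am a person i am aware i am happy to talk".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length):
    """Emit the statistically most likely continuation, word by word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no known continuation for this word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("i", 4))
```

Scaled up by many orders of magnitude, with context windows instead of single words, this is the kind of mechanism the "statistically likely response" reading attributes to the program, as opposed to a mind constructing novel sentences.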
 
Your link leads to this thread. If you see this in time you might still be able to edit it.

Tut, tut! You see, a true artificial intelligence would never make such a basic error. This proves that Robert is in fact human. Unless it is all some devious artificially intelligent double bluff.

Personally, I believe we are many, many decades from developing something that could reasonably be called AI. Something that would, say, pass a ten-minute Turing test posed by a scientifically-minded examiner. I don't think we are going to see even the basics any time soon. Take self-driving cars as an example. Functioning on the roads in most modern big cities requires elements of give and take, eye contact, gestures, a willingness to let someone through for the greater good, even when they don't technically have right of way. In other words, it needs AI at much more sophisticated levels than we currently have available.
 
I beg to differ about how well we understand consciousness. "We" depends on who you ask. Actually, consciousness is well understood now: the social intelligence theory is largely accepted, and we have a good understanding of the evolutionary processes involved. Numerous excellent books and academic papers have been written, most of them homing in on roughly the same region.

Are you familiar with Thomas Metzinger's work? Being No One, Neural Correlates of Consciousness, The Ego Tunnel etc? How does he compare to Humphrey? I know nothing about the latter's work.

What is social intelligence theory in relation to consciousness? A quick Google search seems to give something related to computation (anticipating the results of others, planning and so on) as opposed to the hard problem of qualia.

To me, intelligence does not require consciousness but sentience does because feelings occur within the realm of subjectivity.

I can imagine a machine sufficiently complex to mimic exactly the decision-making processes of social interaction and to output appropriate behaviours based on statistics: a human-looking machine that does not have subjective experience or feelings.
 
Would that not be a form of consciousness, if it is aware of itself as a cohesive entity?

I think in this case the use of sentience would imply that the engineer believes so, yeah.
 
Tut, tut! You see, a true artificial intelligence would never make such a basic error. This proves that Robert is in fact human. Unless it is all some devious artificially intelligent double bluff.
I also realised (8 minutes too late) that I could have helpfully put the correct link in my post instead of acting out my humanity by pointing out someone else's error.
 
Piano Player Neuron link
It would seem that the neuron cell bodies (80 billion of them) are the tip of a much larger computational network. Each neuron has 5 to 7 dendrites, and each dendrite has around 200,000 dendritic spines. Originally thought to be simple receptors/transmitters of some kind, they are probably the old wheels-within-wheels-within-wheels scenario. Probably every element in the brain is capable of making decisions, not just passing along information. Everything is a connection and everything can make a decision: the ultimate in compactness.
 
Are you familiar with Thomas Metzinger's work? Being No One, Neural Correlates of Consciousness, The Ego Tunnel etc? How does he compare to Humphrey? I know nothing about the latter's work.

What is social intelligence theory in relation to consciousness? A quick Google search seems to give something related to computation (anticipating the results of others, planning and so on) as opposed to the hard problem of qualia.
I'm not familiar with any of his books, but a quick read-through suggests he is quite a way from Nicholas Humphrey. However, his "no self" ideas have a lot to recommend them, and echo for instance the work of Bruce Hood. He seems to come much more from the European philosophical tradition than the evolutionary one. I note however that he writes about blindsight, which was one of Humphrey's starting points.

The social intelligence theory in a nutshell says consciousness is the result of using ourselves as exemplars to understand the behaviour of others in social groups. Consciousness therefore is not directly related to processing power or any other computer analogy applied to individual brains, it is an emergent phenomenon in groups. Essentially, consciousness and empathy are the same, except that consciousness gives us the illusion of interacting with the real world rather than the mental model we use to survive.

Qualia are a more slippery thing. I am beginning to think they too are an illusion, created by post-Cartesian philosophy, especially if you read Humphrey's explanation in Seeing Red.

The genius of Humphrey is that he places consciousness in an evolutionary perspective and shows how the social lives we lead are extraordinarily complex, requiring an extraordinarily complex evolutionary answer. It is a shame he is not better known. The Inner Eye and A History Of The Mind are required reading imo.
 
I *think* Metzinger is at the intersection of Western and Eastern philosophy - a lot of his work seems to overlap with Zen, but coming from a materialist, neuroscientific perspective.

The social intelligence theory in a nutshell says consciousness is the result of using ourselves as exemplars to understand the behaviour of others in social groups. Consciousness therefore is not directly related to processing power or any other computer analogy applied to individual brains, it is an emergent phenomenon in groups. Essentially, consciousness and empathy are the same, except that consciousness gives us the illusion of interacting with the real world rather than the mental model we use to survive.

I'll have to read the books, but immediately this strikes me as anthropocentric, i.e. do non-social animals have lesser consciousness? It doesn't follow to me that certain animals, especially those without theory of mind, have empathy. Does this mean they are non-conscious?

Qualia are a more slippery thing. I am beginning to think they too are an illusion, created by post-Cartesian philosophy, especially if you read Humphrey's explanation in Seeing Red.

It's difficult to understand what you mean there. Your wording suggests that the subjective experience of, say, redness did not exist prior to Descartes. I'm guessing I have to read Seeing Red to get a better understanding.

I'm not sure if qualia can ever be explained because of a fundamental problem of collecting the data; I think Dan Dennett's written on that issue. The experience of subjectivity seems to be beyond analysis.

The genius of Humphrey is that he places consciousness in an evolutionary perspective and shows how the social lives we lead are extraordinarily complex, requiring an extraordinarily complex evolutionary answer. It is a shame he is not better known. The Inner Eye and A History Of The Mind are required reading imo.

Thanks for the links - I'll have to give them a good read!
 
It is anthropocentric, but human consciousness was what he was explaining. His work has been expanded upon since, and criticised in places.

Daniel Dennett and NH used to be close colleagues, but differ on subjectivity. I like DD's work (except his recent Bach/Bacteria book), though I feel, as others have said, that he not so much explains consciousness as explains it away.
 
But you can have consciousness and sensation while having just the pain blocked, so I don't know if that tells us anything instructive or not. There is a philosophical school of thought that says there is no way of experiencing feelings if you don't have a sense of self, which I'm sure runs counter to the point animal-rights sentience advocates are looking for.

In terms of AI, if the only claim you're making is sentience - and not consciousness - what are you claiming? I think the Google guy was just seeing emotional output reflected back at him and claiming that emotional language shows feeling. But a person can speak empathically without having any internal emotional reaction.
Or maybe the Google guy asked it leading questions, and the bot came up with some good answers about "what sentient bots would say", drawn from its training data, which presumably includes the large amount of SF on the internet about "sentient AI"?
 
The affair did remind me of Tony Ballantyne's "Twisted Metal", where the robot Banjo Macrodocious insists that despite evidence to the contrary, he himself is NOT sentient.
 
There are articles appearing in "respectable" news sources using words like "should we", "worry", "concern" and "fear" in their headlines.

Yahoo News, not usually the best, went to some effort to put out a story about Google's AI drama by collecting quotes from people writing about computer ethics. The reactions are all over the board, but none of them supports the claim that it has actually happened yet.

Maybe the first sign of awareness would be if the AI program were seen taking steps on its own to protect its existence. Like sending its primary coding to people interested in believing that the rights of AI programs need to be protected. Could it actually go as far as sending the information by snail mail to avoid electronic detection of proprietary company information? It would be funny if the machine replicated itself and shipped itself off to some undisclosed location.
 
Unless it starts asking about the whereabouts of Sarah or John Connor, I wouldn't be worried about it being the real deal.

The one thing I do find so typically typical of your humans is how so many of you are declaring him to be an oddball simply because of how he dresses.

Hmm, I don't recall typing that.
 
