Robot evolution

The problem with lying, they say, is trying to remember the lies -- a robot wouldn't have the problem? A perfect memory?
There are a couple of challenges. One, memory storage is not infinite -- eventually something needs to be disposed of or overwritten. Two, linkages are not infinite. How is the robot supposed to correlate a current statement or question with a past lie?
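A rough, purely illustrative sketch (not from the thread) of the bookkeeping this would take, assuming lies are keyed by an exact topic string and storage is bounded, so the oldest entries are eventually evicted and can no longer be checked against:

```python
from collections import OrderedDict

class LieLedger:
    """Toy sketch: a bounded store of past claims, keyed by topic."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.claims = OrderedDict()  # topic -> claim previously made

    def record(self, topic, claim):
        if topic in self.claims:
            self.claims.move_to_end(topic)
        self.claims[topic] = claim
        if len(self.claims) > self.capacity:
            self.claims.popitem(last=False)  # finite memory: the oldest entry is lost

    def consistent_answer(self, topic, truthful_answer):
        # Repeat the earlier claim if one exists; otherwise answer truthfully.
        return self.claims.get(topic, truthful_answer)


ledger = LieLedger(capacity=2)
ledger.record("whereabouts last night", "I was recharging in the dock")
print(ledger.consistent_answer("whereabouts last night", "I was out walking"))  # repeats the lie
print(ledger.consistent_answer("battery level", "73%"))                         # no stored lie, so the truth
```

The catch is visible in the lookup: a differently worded question produces a different key, so the past lie is never found.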

Artificial intelligence still has a long way to go, and technologists keep lowering the bar on what qualifies as artificial intelligence.
 
How is the robot supposed to correlate a current statement or question with a past lie?
Well, this is where a human liar gets caught out. A computer would probably have a better recollection? They're better at chess than human beings?
 
There will come a time in the evolution of AI when the distinction between human and computer consciousness will become almost impossible to draw. At some stage I think it's inevitable that there will be an argument over whether a mind that is capable of rational thought, of recognising itself for what it is, and of growing and developing without human assistance would need to be granted rights and privileges above those of an inanimate object.
 
I think artificial intelligence will develop an entirely different and alien form of consciousness. It'll have perceptions and modes of thought completely different from ours. If it has emotions, they'll be alien. I'm sceptical about whether it will have the internal illusion of a being that experiences the world as we do - or whether it will just be a Chinese room with the appearance of being conscious.

It'll likely be able to understand and predict our behaviour, but I think we'll struggle to comprehend how it thinks - in the same way we don't understand why a neural net can do what it does.
 
Not sure about that, pm.

AI is based on computer technology. A computer can do the following: add, subtract, multiply, divide, perform input / output, compare two values and divert its program based on the result. That’s it. No more, no less.

Whatever it appears to be doing, it's just doing one of those things, but incredibly quickly. There's no function in there that allows for self-recognition, self-awareness or intelligence. It just remains an inanimate object appearing to do human-like things.
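As a purely illustrative aside (not part of the post above): even behaviour that looks like a decision reduces to that instruction set - a subtraction, a comparison, and a branch.

```python
# Illustration only: a "decision" that is nothing but subtract, compare, branch.

def thermostat(reading, setpoint=20.0):
    error = setpoint - reading   # subtract
    if error > 0.5:              # compare two values...
        return "heat on"         # ...and divert the program one way
    return "heat off"            # ...or the other

print(thermostat(18.2))  # heat on
print(thermostat(21.0))  # heat off
```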
 
So AIs will brick themselves as soon as they figure out that humans do not compute.
 
I think that computers can do very well to imitate life in many respects; they can learn from experience, they can improve and they can come to logical conclusions, much the same as humans can. Does a computer know it's a computer? I don't know, but if it could, would that make it self-aware?

I think the one thing that computers can't - and may never be able to - fully replicate is human emotion. Which is probably for the best, considering that we put our lives in their hands.
 
‘Knowing’ isn’t one of the commands available to a computer, pm.

‘Learning’ is accumulating / modifying data so that, in future, when one or both of the compared values have changed, a different program branch will result. Likewise with logical conclusions.

All these things are subject to the basic instruction set. There’s nothing else.

The giveaway is that computers can’t generate random numbers.
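To illustrate that last point (an editorial sketch, not part of the post): what a computer produces is pseudorandomness - a deterministic sequence built from the same add/multiply/remainder primitives, so the same seed always yields the same "random" numbers.

```python
# A linear congruential generator: "random" numbers from multiply, add and a
# remainder. Deterministic - the same seed reproduces the same sequence.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale into [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(3)])

gen_again = lcg(seed=42)
print([round(next(gen_again), 4) for _ in range(3)])  # identical output: same seed, same numbers
```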
 
A computer can do the following: add, subtract, multiply, divide, perform input / output, compare two values and divert its program based on the result.
Consider that a neuron cannot add, subtract, multiply, or divide. The CPU of a computer is not necessarily a restriction on its capabilities.

I think that computers can do very well to imitate life in many respects; they can learn from experience, they can improve and they can come to logical conclusions
I think saying that computers can learn from experience is still a bit of a stretch. Currently, the most common level of AI is Machine Learning. In a learning phase, the computer is presented with (often) a series of images and the expected result of whether each image meets specific criteria. The computer designs its own algorithm(s) for determining the result, and when it hits a certain threshold of correct responses, it is considered trained. At this point, it is put into actual use and no longer adapts its algorithm. If it provides a correct response in a certain situation, it will repeatedly provide a correct response in that situation. Likewise, if it provides an incorrect response in a situation, it will repeat the incorrect response whenever that situation occurs. It no longer adapts or learns from experience.
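A minimal sketch of that train-then-freeze lifecycle (a toy linear model with made-up numbers, purely to illustrate the description above): weights are adjusted only until an accuracy threshold is reached, after which the model is deployed unchanged, so any mistake it makes in a given situation recurs every time that situation appears.

```python
def train(samples, target_accuracy=0.95, max_epochs=100):
    weights = [0.0, 0.0]
    for _ in range(max_epochs):
        correct = 0
        for features, label in samples:
            score = sum(w * f for w, f in zip(weights, features))
            prediction = 1 if score > 0 else 0
            if prediction == label:
                correct += 1
            else:
                # adjust weights only while training
                step = 0.1 if label == 1 else -0.1
                weights = [w + step * f for w, f in zip(weights, features)]
        if correct / len(samples) >= target_accuracy:
            break
    return weights  # frozen from here on

def deploy(weights, features):
    # no adaptation after deployment: same input, same answer, right or wrong
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

samples = [([1.0, 0.2], 1), ([0.1, 1.0], 0), ([0.9, 0.3], 1), ([0.2, 0.8], 0)]
frozen = train(samples)
print([deploy(frozen, f) for f, _ in samples])  # [1, 0, 1, 0]
```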

In applications such as facial recognition, programs provide a frustrating mix of results. In some scenarios, the computer outperforms human operators, while in other scenarios, it underperforms. One of the reasons facial recognition is so controversial is that it generates a high percentage of false-positive matches for non-Caucasian faces.

Computer programs lack the ability to determine their own goals and their own success criteria for meeting those goals. AI does accomplish some things that make one wonder, 'How did it do that?' There are also aspects of autonomy that we do not know how to provide for a computer, and without those, it will not be what I would call an intelligent being.
 
The difference, Wayne, is that a neuron and a CPU aren’t comparable. A brain has billions of neurons, the CPU is all the computer has - there is nothing else for it to use - and it just has a basic instruction set.
 
Thanks for the explanation @mosaix

It's something I hadn't properly understood, and something I won't forget.

Back to the OP: why would a robot lie to a human?
 
The difference, Wayne, is that a neuron and a CPU aren’t comparable. A brain has billions of neurons, the CPU is all the computer has - there is nothing else for it to use - and it just has a basic instruction set.

Musk's company supposedly has developed a rural interface that, in theory, should be able to do just that.
 
Musk's company supposedly has developed a rural interface that, in theory, should be able to do just that.
The rural interface is too spread out. Communication between barns is too slow.
 
The difference, Wayne, is that a neuron and a CPU aren’t comparable. A brain has billions of neurons, the CPU is all the computer has - there is nothing else for it to use - and it just has a basic instruction set.
True. It's not even about the number of neurons, it's about the interconnectivity of those neurons.
 
The difference, Wayne, is that a neuron and a CPU aren’t comparable. A brain has billions of neurons, the CPU is all the computer has - there is nothing else for it to use - and it just has a basic instruction set.
I believe comparing a CPU to a neuron is an oversimplification. If comparing counts, then a slightly better comparison would be the number of transistors versus the number of neurons. Because of the different allocations of functionality, one should include memory and I/O (input/output) devices in addition to CPUs on the computer side, and sensory cells and cells handling reflexive (not requiring brain interaction) responses in addition to brain cells on the biological side. This is necessary because, though the functionalities of transistors in a computing system are understood, the division of functionality within living creatures is not well understood.

Complicating the matter is that most of the transistors in a computing system are run in a binary manner, while cells and neurons operate in an analog manner. This requires transistors to operate in parallel to represent analog signal levels. To replicate human capabilities, an 8-bit-wide, 256-level representation is often used. Greater widths allow computer sensory inputs that exceed human capabilities.
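For illustration (an editorial sketch, not from the post): mapping an analog level onto an 8-bit value gives 256 discrete steps, and a wider representation gives correspondingly finer resolution.

```python
# Quantise an analog level in [0.0, 1.0] onto 2**bits discrete steps.

def quantize(level, bits=8):
    steps = 2 ** bits - 1
    return round(level * steps)

print(quantize(0.5, bits=8))   # 128 -- one of 256 levels
print(quantize(0.5, bits=16))  # 32768 -- one of 65536 levels, finer than the 8-bit case
```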

Another complicating factor is that computer applications have a limited focus or scope, while human activities cover a broad range. There is limited ability to correlate specific neurons with specific activities, and identifying which neuron does what is key to being able to compare the two.

The bottom line is that it is not possible to do a side-by-side count of things to compare humans and computers. What should be observable is that humans do a wide range of things relatively well, and that computers are able to do more and more specific things better than the general human population, sometimes exceeding the capabilities of expert humans. It is also true that, in newer technologies, humans no longer control (or understand) how computers reach the conclusions that they do.
 
Quite frankly I'd rather hire humans. They need jobs, need money, and need things to do. Why are we so intent on replacing ourselves? When we have no gods, we make new ones, it seems.

 
Seeing this thread just now made me realise that there are numerous other robot threads around this forum, and perhaps I should link them here.
 
