Should we even be trying for AI?

But computer chess doesn't use A.I.
IBM's Watson isn't A.I. either.

Not a single A.I. demo exists.

The "Turing Test" is now widely regarded as not being a test of A.I. at all. It was really just a passing thought of Alan Turing's at the time, with none of the rigour of his "Turing Machine" papers on computable problems.

It may be that this process goes on for a long time; ever more impressive thresholds will be crossed by computers such as Watson. Progress towards AI, but never the achievement of real artificial intelligence itself.

However, we should keep trying. We might figure out how to do A.I.
We also need, though, to define exactly what "real A.I." would actually be useful for: something that can't otherwise be done by a machine, or done more economically and better by a human.
 
And then there's Roko's Basilisk, one of the strangest thought experiments ever, at least as far as the result went.

This showed up at LessWrong, a site supposedly devoted to developing more rational thinking.
These guys basically strongly believe that AI will happen, and that humans will then be uploaded into it in a form of the Singularity, which will mean the end of death, hunger, disease, war and so on. Now it's obvious that the AI has to be 'friendly' (a special term at LW; they have a lot) which basically means "supporting human flourishing", otherwise we could be in trouble, a la "The Matrix".

They also believe that the program of you in the AI is the same as you now, because it is an exact replica; therefore anything that happens to it will be something that actually happens to you in the future.

If the AI is friendly, it will want the uploading of human beings to happen as soon as possible, to cut short the amount of human suffering. Therefore it will want people living now to contribute as much as possible to the development of the AI. But how can a machine from the future influence your behavior now? By threatening to subject your reconstructed self (which will actually be identical to the you of now) to infinite torture if you don't get to work now.

But of course this thought trap only works if you know about it: the AI can't justify torturing you for something you could have done but didn't, if you never learned about it. Follow?

This seems extremely far-fetched, but when it was first posted it was enough to send the site owner into a state of panic. He immediately banned any mention of it, in hopes of keeping LessWrong's supporters from being punished by the AI in the future.

"there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. "
......
Commenters quickly complained that merely reading Roko's words had increased the likelihood that the future AI would punish them — the line of reasoning was so compelling to them that they believed the AI (which would know they'd once read Roko's post) would now punish them even more for being aware of it and failing to donate all of their income to institutions devoted to the god-AI's development. So even looking at this idea was harmful.....
Roko's basilisk - RationalWiki

a simpler description
The Most Terrifying Thought Experiment of All Time
 
We need to stop global warming and terrorism; this should be a priority, not A.I.
 
If 'AI' is a new superior lifeform, conscious and aware, it would also have superior ethics and morality. And as everything put into it to create that lifeform came from humans, its morality would be ours. It would cure the problems, not embark immediately upon a programme of annihilation of its creators.

.
 
If it is conscious and aware, as you say, then it would do whatever it liked. That's kinda the point.

Yes, but 'whatever it liked' is unlikely to be immediately psychopathic. Everything put into its creation would have come from us, so unless the humans doing that were all psychopaths it would be 'post-human'.

The only comparison we have is the evolution of Homo sapiens sapiens from its more ape-like progenitor. There doesn't seem to be any evidence that this new, higher lifeform immediately began a pogrom against its parents and their species.

What would an AI gain from wiping us out, other than an empty world and a very lonely existence from then on?

.
 
Mercedes announced this week that it is replacing robots with humans. The robots take too long to program and are less flexible. At the minute no-one knows what real AI might be like. Expert Systems and so-called Neural Networks (nothing to do with real biological brains) are not AI, except for marketing.
 
Yes, but 'whatever it liked' is unlikely to be immediately psychopathic. Everything put into its creation would have come from us, so unless the humans doing that were all psychopaths it would be 'post-human'.

The only comparison we have is the evolution of Homo sapiens sapiens from its more ape-like progenitor. There doesn't seem to be any evidence that this new, higher lifeform immediately began a pogrom against its parents and their species.

.

Talk to any Neanderthals about that lately?
 
Talk to any Neanderthals about that lately?

Very probably. It is well known that many people possess genes from a Neanderthal ancestor. Current theory suggests a long-term merging until no 'pure' Neanderthals remained. No genocide required.

.
 
Yes, but 'whatever it liked' is unlikely to be immediately psychopathic. Everything put into its creation would have come from us, so unless the humans doing that were all psychopaths it would be 'post-human'.

The only comparison we have is the evolution of Homo sapiens sapiens from its more ape-like progenitor. There doesn't seem to be any evidence that this new, higher lifeform immediately began a pogrom against its parents and their species.

What would an AI gain from wiping us out, other than an empty world and a very lonely existence from then on?

.

I didn't say it would immediately wipe us out or be a psychopath. I said it would do whatever it liked.
 
Mercedes announced this week that it is replacing robots with humans. The robots take too long to program and are less flexible. At the minute no-one knows what real AI might be like. Expert Systems and so-called Neural Networks (nothing to do with real biological brains) are not AI, except for marketing.

A few factoids...

1. While Expert Systems and Neural Networks are two tools of programming, there are also Fuzzy Systems and Evolutionary Algorithms.
2. Research is ongoing into combinations of the four types; there have already been some very powerful results in controlling real-world systems to deal with the vagaries of the weather and humans, which could (not saying would) lead to true AI.
3. Research into human brains has shown that there is some kind of switch in their development that makes children of about four or five develop self-awareness, consciousness, call it what you will. If we can work out what that switch is based on, then we can model it in computers.
4. Mercedes are actually following in the footsteps of Toyota, who are replacing robots with humans in their 'management chain' for similar reasons.

This is where I slink away... meow!
 
I didn't say it would immediately wipe us out or be a psychopath. I said it would do whatever it liked.

No, you didn't. You did also say 'that's kinda the point', but there are two points to this discussion: either the creation of AI will be a 'quantum leap' into a better future for humans, or the creation of AI will see it eradicate humans from the Earth because movies say it will be 'evil'. The 'quantum leap' party has my vote; the 'evil' party seems to have seen too many Terminator movies.

.
 
Problems only occur when you don't imprint the Three Laws of Robotics onto the positronic brain. If they'd done that with Skynet, The Terminator would have been a very different movie.

In all seriousness, I think we are far further off A.I. than we realise; we don't even fully understand how human intelligence works. We will no doubt discover many other useful and world-changing technologies on the way, so I'm all for striving for it.
 
Research into human brains has shown that there is some kind of switch in their development that makes children about four or five develop self-awareness, consciousness, call it what you will. If we can work out what that switch is based on, then we can model it in computers.
Actually, that's pure supposition:
1) Children are self-aware much earlier,
2) We don't know when,
3) The supposed switch is hypothetical,
4) Just because we know how something in biology works means nothing about programming computers!

Fuzzy Systems and Evolutionary Algorithms.
Just jargon!
FS are just programs using probabilistic weighting of data rather than If or Case statements. Nothing to do with intelligence. In fact humans are terrible at estimating probability instinctively, i.e. other than by working it out mathematically.
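To make the contrast concrete, here's a minimal sketch in Python of a crisp If-style rule versus a fuzzy weighted one. The function names and the 20-30 degree ramp are purely illustrative, not from any real library:

```python
def crisp_is_hot(temp_c: float) -> bool:
    # Classic If-statement: a hard threshold, all-or-nothing.
    return temp_c >= 30.0

def fuzzy_is_hot(temp_c: float) -> float:
    # Fuzzy membership: a degree of "hotness" between 0.0 and 1.0,
    # ramping linearly between 20 and 30 degrees C.
    if temp_c <= 20.0:
        return 0.0
    if temp_c >= 30.0:
        return 1.0
    return (temp_c - 20.0) / 10.0

print(crisp_is_hot(25.0))  # False: the crisp rule gives no shades
print(fuzzy_is_hot(25.0))  # 0.5: the fuzzy rule weights the input
```

Nothing "intelligent" happens here; it's just arithmetic in place of a branch, which is the point being made above.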
EA are nothing to do with Evolution, or indeed intelligence. It's just a programming technique: randomised trial-and-error with selection.
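For what that technique amounts to, here's a toy sketch, assuming a made-up goal of finding the number that minimises (x - 7)^2; everything here is illustrative:

```python
import random

def fitness(x: float) -> float:
    # Higher is better; peaks at x = 7.
    return -(x - 7.0) ** 2

random.seed(0)
population = [random.uniform(-10.0, 10.0) for _ in range(20)]

for generation in range(100):
    # "Selection": keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # "Mutation": refill by jiggling the parents with small noise.
    population = parents + [p + random.gauss(0.0, 0.5) for p in parents]

best = max(population, key=fitness)
print(round(best, 1))  # converges near 7.0
```

As the post says, there's no biology in this: it's randomised guessing plus a keep-the-best filter, dressed in evolutionary jargon.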

All "so called" AI jargon is deliberately misleading. It's about marketing, investment and grant funding.
 