Should we even be trying for AI?

The 'quantum leap' party has my vote
Yes, I agree, though obviously I can't see ANY evidence that we are any nearer AI than we were in 1947. There is no reason to assume that the "Three Laws of Robotics" apply; they are fiction, written to allow examining how they might be broken, and autonomous devices already break them. Conversely, all of the dystopian SF about AI is simply fantasy, even more so than Iain M. Banks' "paradisiacal" AI.
All AI stories are at the level of fairies, wizards, etc. As a programmer for over 30 years who has been involved in so-called AI research, I can say that ANYTHING labelled "AI" in the media, or even in university computer science departments, is jargon. No intelligence at all, just applied programming and databases.
 
3. Research into human brains has shown that there is some kind of switch in their development that makes children of about four or five develop self-awareness, consciousness, call it what you will. If we can work out what that switch is based on, then we can model it in computers.

It's not a switch, and it all happens at around 18 months.
 
Let's assume for the moment that strong AI is possible, probably via some sort of self-organising and perhaps even pseudo-evolutionary process. I'm inclined to think that it is, particularly if one is not religious; for an example of a hugely complex network of nanomachines and computing devices that achieved sapience, look in the mirror. If nature managed it once, it is possible in principle.

And that brings us to the question of whether we (humanity, or some subsection of it) should try to build AI at all - assuming that we actually have the choice; commercial and other pressures may force us into it. The latest mobile phones are a not unreasonable facsimile of intelligence, and I've seen video of robots with the ability to generalise from the particular, albeit in a rather crude and limited way. (The demo I saw had a robot correctly deducing that an object of a shape it had never seen before was a chair.)

I'm inclined to believe that true AI is going to have goals and motivations of its own, ones we didn't put there; self-preservation instincts would seem to be inevitable. Perhaps also an instinct to reproduce. There is also the issue of runaway intelligence growth; unlike humans, computers and robots could plug in extra hardware.

And, of course, the robots will need resources of one sort or another, probably many sorts, which means they will be, to some extent, in competition for those resources with us and with our dumb (sub-sapient) hardware.

Should we even be trying? And can we stop ourselves? After all, a nation or corporation which has strong AI helping it with planning has an advantage...
It already does exist. In fact my TV got married last week. The reception was amazing.
 
