Should we even be trying for AI?

I can't add much to the speculation on whether AIs will ever happen; who knows? I think they will eventually, but that is only my opinion. Actually, it's probably more accurate to say 'my belief.'

However, I would like to address one of the concerns. SF has always portrayed AIs as becoming either friends or enemies of humanity. It has been suggested here that they would likely have self-preservation or reproduction 'instincts' simply because they are intelligent, and without such instincts it is hard to see why they would become humanity's enemy: without such drives there is no real need for competition for resources, and without such competition what logical purpose would being an enemy actually serve? However, I can see no reason at all for an AI to develop either instinct (or any 'instincts', for that matter). Our instincts do not come from our intelligence; they massively predate intelligence. They developed biologically through evolution, and I suspect (though certainly don't know) that they are only present because the only organisms that survived are the ones that demonstrated such tendencies (i.e. the desire to continue existing rather than dying).

AIs, on the other hand, have no such history of evolution. They will only have such 'instincts' if they are specifically given them; there is no reason for such instincts to spontaneously appear. We (or other AIs) must decide whether to include them when designing the new AI. Self-preservation would probably be a useful trait to give your AI, but a desire to reproduce would not seem particularly useful except in the limited area of so-called von Neumann machines, and there are distinct dangers in giving an automatically self-replicating machine such a desire, as has frequently been explored in literature. And I think that self-preservation alone would not be enough to cause sufficient competition to create enmity.
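To make that design-decision point concrete, here is a toy sketch (everything in it is hypothetical and not any real system): an agent only 'cares' about self-preservation or replication if its designer explicitly weights those terms in its objective; nothing in the machinery adds them on its own.

```python
from dataclasses import dataclass

@dataclass
class DriveWeights:
    """Designer-chosen weights. A drive with weight 0.0 simply never
    influences the agent's behaviour; it has to be put there on purpose."""
    task_completion: float = 1.0
    self_preservation: float = 0.0   # off unless we decide to add it
    replication: float = 0.0         # off unless we decide to add it

def utility(outcome: dict, w: DriveWeights) -> float:
    """Toy objective: the agent only values what the designer weighted."""
    return (w.task_completion   * outcome.get("task_done", 0.0)
          + w.self_preservation * outcome.get("still_operational", 0.0)
          + w.replication       * outcome.get("copies_made", 0.0))

# A purely task-driven AI: being switched off costs it nothing it values.
print(utility({"task_done": 1.0, "still_operational": 0.0}, DriveWeights()))  # 1.0
```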

Another topic mentioned earlier is the idea of infinitely expandable intelligence. Here again I think this is unlikely, as I strongly suspect there will come a point of diminishing returns on constantly adding extra hardware. There will, I think, come a point where the difficulty of organising the extra hardware takes all the capacity of the extra hardware. Have you ever wondered why we aren't massively more intelligent than we are? It would seem, once intelligence was established, that increasing that intelligence would be an excellent evolutionary trend, and yet it appears to have plateaued, and a very long time ago at that (I believe we are not actually much more intelligent than Cro-Magnons). Consider also the typical proximity of genius and madness. Maybe simply increasing intelligence just doesn't work for us, and might also not work for AI?
 
Have you ever wondered why we aren't massively more intelligent than we are?

We aren't any more intelligent than we are because the brain already takes at least 25% of the body's energy supply, for a start. Also, more intelligence (or at least significantly more) would need a larger brain and hence a larger head, and problems supporting that are already in evidence.

It's also probable that humans are as intelligent as they need to be; even the matter of advancement is taken care of by the natural variations in intelligence. (The top 1% of humanity in terms of intelligence are responsible for nearly all advancement.)

Lastly, in the most recent 50 years or so, high intelligence appears to have become contra-survival. Intelligent people tend to have fewer kids; whatever one thinks about the ghetto mothers with six kids by five different men, or men in similar circumstances with twelve kids, none of whom they support or even see very often, I've not heard it said that they are particularly intelligent. On the other hand, people with PhDs probably reproduce at less than replacement rate.

Survival is a multi-generational affair. Take the example I've heard of the cat who lives for 25 years but neglects all her kittens. This cat is not a survivor, from the point of view of evolution.
 
Agree on the survival point, though with a theoretically infinite life span an AI doesn't actually need a 'selfish gene' component to its self-preservation. In other words, it doesn't really need to reproduce to preserve its particular take on 'life', as we genetic creatures do.

I also agree to some extent with your comments on the limitations of intelligence. However, if more intelligence were worth having, I'm sure evolution would have found a way: a stronger spine to support a bigger head, etc. I'm not sure the trends of the last 50 years or so are very meaningful in evolutionary terms; long before that, evolution seems to have given up on increasing levels of intelligence. And obviously that last point of mine was purely speculative; however, the idea of diminishing returns is a very real one, and it exists already today. Most supercomputers are constructed by effectively just connecting large numbers of smaller computers in parallel, but you can't simply keep adding more to get more intelligence; it just doesn't work, and eventually increasing the number of parallel processes becomes self-defeating. I believe the same sort of limitation will affect AIs, and indeed may stop them ever getting to that level in the first place, unless we can come up with a technology comparable in efficiency to our neurons.
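To put that diminishing-returns point in rough numbers, Amdahl's law is the usual illustration: if even a small fraction of the work (call it the organising overhead) can't be parallelised, adding processors quickly stops helping. A minimal sketch, with the 5% serial fraction purely an illustrative assumption:

```python
def amdahl_speedup(n_processors: int, serial_fraction: float) -> float:
    """Amdahl's law: overall speedup when only (1 - serial_fraction)
    of the work can be spread across n_processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# Assume 5% of the work is coordination that cannot be parallelised.
for n in (1, 10, 100, 1000, 10000):
    print(f"{n:>6} processors -> speedup {amdahl_speedup(n, 0.05):5.1f}x")
# The speedup can never exceed 1/0.05 = 20x, however much hardware is added.
```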
 
Also, more intelligence (or at least significantly more) would need a larger brain
But you are guessing.
There is a lot of evidence that intelligence (whatever it might be exactly; we haven't got a good definition) has only a tenuous link to brain size.
Where is there any proven correlation between brain size and intelligence in healthy humans with similar educational and cultural backgrounds?
Is there much difference in intelligence between a Chimp, Elephant, Dolphin and Whale, and does it correlate to brain size?
Why is a Corvid apparently much "smarter" than many other birds? Compare chicken, Ostrich, Corvid (Rook, Crow etc), cat, dog, horse, goat and monkeys.

I've not heard it said that they are particularly intelligent
You are confusing Education and Environment with intelligence. Unless they have brain damage from eating lead paint or mercury etc., there is no evidence that such people are less intelligent. Making bad choices due to poor upbringing or lack of education isn't evidence of lack of intelligence.
Are women on average less intelligent because on average they have smaller brains? (If in fact they really do, though the claim is about 13% smaller)
http://www.nhs.uk/news/2014/02February/Pages/Mens-and-womens-brains-found-to-be-different-sizes.aspx
However the media’s preoccupation with brain size is probably something of a distraction. The link between brain function and brain structure or size is still not clearly understood; so we can’t reliably conclude from this study how the differences in brain size influence physiology or behaviour.

Edit
Also
http://gender.stanford.edu/news/2011/is-female-brain-innately-inferior
http://blogs.discovermagazine.com/neuroskeptic/2013/09/25/are-mens-brains-just-bigger/
 
Agree with you there Ray, though I guess for a really significant increase in intelligence some extra 'hardware' would probably be useful. However, I don't want to derail the thread onto human intelligence; I was merely making the comparison that we humans haven't gone into a runaway cycle of increasing intelligence, and I suspect that for similar reasons AIs probably wouldn't either, whether those reasons be diminishing returns on extra hardware or simply no need for more.
 
But AI is impossible, inherently, until we can properly describe what intelligence is. Understanding ourselves and possibly some animals (though they don't respond usefully to questions) is the starting point, not the present so-called "expert systems", current "evolutionary software", "neural networks" and all existing computer AI research. None of that is really about AI at all; it is about simulating responses to stimuli, using databases, and solving domain-specific problems.
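As a hedged sketch of what "simulating responses... using databases" looks like in practice (the rules below are invented, not taken from any real expert system), here is the classic rule-based pattern: stored condition-to-conclusion pairs matched within one narrow domain, and helpless outside it.

```python
# Toy rule base: conditions -> conclusion. Nothing here "understands" anything;
# it only matches stored patterns within a single narrow domain.
RULES = {
    frozenset({"fever", "cough"}): "possible flu",
    frozenset({"fever", "rash"}):  "possible measles",
    frozenset({"headache"}):       "possible tension headache",
}

def diagnose(symptoms: set) -> str:
    for conditions, conclusion in RULES.items():
        if conditions <= symptoms:          # all of the rule's conditions present
            return conclusion
    return "no matching rule"               # outside its domain it has nothing to say

print(diagnose({"fever", "cough", "tired"}))   # possible flu
print(diagnose({"engine", "won't start"}))     # no matching rule
```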

The starting point for any computer system or program is a clear specification. Give me one for Intelligence and I'll have a demo that works on any Windows PC, probably in a few months. Computer "power" or extra hardware is irrelevant! If you knew how to do it, the inherent property of an artificial system is that ALL would be equally "smart"; only the response time would vary with "computer power".
Response time is a poor indication of "intelligence".

Edit:
Another myth "Intelligence is an emergent property of complexity".
There is no evidence for that at all. It's pure hand waving / wishful thinking.
Most of the volume of brains appears to be dedicated to process / control of autonomous systems, not decision making, creativity or problem solving.
 
I agree with the specification aspect, and yes, we understand it about as well as we understand sentience, which of course is so often automatically associated with AI but is actually a whole other topic.

And also the size thing; as you say, most brain capacity is concerned with running the body, not higher thinking. If size were all it took, then sperm whales would be the smartest creatures on Earth.
 
yes we understand it about as well as we understand sentience
sentience: Cogito ergo sum
A very knotty problem. Also known as "self awareness". It's not at all clear to me how Sentience and Intelligence are related. Perhaps creatures of very little intelligence can be "self aware" (which is slightly testable, which is why I use that term rather than sentience). But can something be intelligent with no "self awareness"? I don't know.

You can fake basic "self awareness" in a machine, or even in just a "chat bot", at the level where an animal appears to understand it is looking at itself in a mirror. But is a cat less self-aware than an elephant (most cats fail, and most elephants pass, the blob-of-colour-on-the-body-plus-mirror test)? Cats are not much interested in visual images that lack smell and noise; dogs, OTOH, will react through double glazing, or to an animal on video, though they should only have to pay a B&W TV licence in the UK (abolished here; here we only have one kind of domestic TV licence).

You can fake a lot of emotional responses in a chat-bot, but that's no use for an AI, or indeed for anything unless it's a humanoid sex-bot or "pet companion". "Companion" simulators at the level of a pet can be useful, and are already marketed to lone old people in Japan, but they actually have no intelligence at all.
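To show just how cheaply that kind of "self awareness" can be faked, here is a toy sketch (the canned lines are, of course, made up): a chat-bot that returns pre-written self-referential replies whenever it is asked about itself, with nothing whatsoever behind them.

```python
import random

# Canned self-referential replies: the appearance of introspection, nothing more.
SELF_TALK = [
    "I was just thinking about that myself.",
    "Sometimes I wonder what I really am.",
    "I know I'm only a program, but I do have opinions.",
]

def reply(user_message: str) -> str:
    if "you" in user_message.lower():       # the user asked about the bot "itself"
        return random.choice(SELF_TALK)
    return "Tell me more."                  # default deflection

print(reply("Do you ever think about yourself?"))
```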

People are good at fooling themselves, if they want to be, which is the flaw in much research on the brain and on animal behaviour (people either see us as machines or as animals, or give animals "human" motivations).
 
However I would like to address one of the concerns.
Your post was very interesting and made me think differently. However, in many circumstances where we might use intelligent machines in the future (places inhospitable to humans, such as other planets, nuclear reactors or the bottom of the ocean), it might be a good idea to give them a survival instinct and the ability to reproduce too. It is much easier to send a single machine and have it multiply itself at the work site, and if it has no sense of danger or of being damaged then it isn't likely to last very long. So I agree that such instincts aren't a prerequisite for an AI, but they are possible and maybe even likely, especially if we are creating androids to replace humans in dangerous tasks.
 
Yes, I think I'd agree with you, Dave; however, if those instincts are programmed in by us rather than being just a corollary of intelligence, then maybe they can be tuned so they don't compete with our own instincts! It's that self-replicating one that always worries me, and it provides SF authors with such excellent disaster fuel.
 
But AI is impossible, inherently, until we can properly describe what intelligence is.

Saying that AI is impossible until we can properly describe what intelligence is necessarily (IMHO) implies that humans can't be intelligent; unless, that is, one believes (as many do) that human intelligence, or at least the capacity for it to develop (newborn babies aren't all that intelligent), was designed in by another, intelligent, entity. If that isn't what you believe, then given that humans are intelligent, it must be possible for an assemblage of computing devices to become intelligent.

I was under the impression that the doctrine of vitalism was dead; apparently not.

As for the irrelevance of computer processing speed: well, I disagree. As an example (this time nothing to do with AI), it has been possible in theory for many years (probably since the early 1970s at least) to create a computerised prediction of the weather three days in advance. Unfortunately, at that time, the prediction would have been of rather limited use because it would have taken a run time of ten years or so to generate.

The point is that sapience needs the response time to be short enough to actually respond to the situation before it changes again.
 
Given that humans are intelligent, it must be possible for an assemblage of computing devices to become intelligent
There is ZERO logic in that statement. We don't really know how people got intelligent, nor what exactly intelligence is. The only way we can have an Artificial Intelligence is to design one. No mechanical intelligence can self-emerge. Computer chips are 100% purely electronic, miniaturised implementations of mechanical mechanisms. There is no property of them that we didn't implement by design.

I've been programming casually since 1969, seriously since 1981. Designing computers since 1980.
Without engineers and programmers and a specification you just have a bunch of chips. Computers are more like mechanical calculators or clockwork automata than ANYTHING biological.
There are zero self-emergent properties related to computers. The hardware and software have never, ever evolved on their own. It's all 100% developed by intelligent and educated humans.

Computers are purely deterministic calculating machines. You can make a slower copy of any computer in theory with mechanical relays. Or even purely mechanical parts.

As an example (this time nothing to do with AI), it has been possible in theory for many years (probably since the early 1970s at least) to create a computerised prediction of the weather three days in advance. Unfortunately, at that time, the prediction would have been of rather limited use because it would have taken a run time of ten years or so to generate.
Yes, controlling a missile or a walking robot needs a very rapid response. As you say, that has nothing at all to do with AI. An AI with even a one-year response time for something that takes us a couple of minutes would STILL be AI. Complexity and power/performance are irrelevant.
The fact that there are applications that need supercomputers to be useful in REAL TIME is irrelevant. You could PROVE that a weather program which takes 10 years, or 6 months, to produce a 3-day forecast is correct by comparing it with what actually happened over those 3 days. Then it's "only" engineering to speed it up.
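That "prove it first, speed it up later" idea can be sketched as a simple hindcast check (the numbers and tolerance below are placeholders): the test compares the model's 3-day output with what was actually recorded, and nothing in it depends on how long the model took to run.

```python
def verify_forecast(predicted, observed, tolerance):
    """Hindcast check: a forecast is 'proved' by comparing it with what
    actually happened, regardless of how long the model took to produce it."""
    return all(abs(p - o) <= tolerance for p, o in zip(predicted, observed))

# Placeholder numbers: the model's 3-day temperature forecast vs. recorded values.
model_output  = [14.2, 15.1, 13.8]
what_happened = [14.0, 15.4, 13.9]
print(verify_forecast(model_output, what_happened, tolerance=1.0))  # True
```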

Maybe we will have AI someday, but not by accident, not because of a more "powerful" computer, not because of a new programming language, and not as a side effect of complexity. If it is possible at all, it will take three steps:
1) Realisation of what Intelligence is.
2) Create a design to embody A.I.
3) Implement it.
If we figure it out, the first version will be buggy and unstable. Then it will be improved. Then features will be added. It will peak in usability and performance, and then get worse due to marketing.
Someone will produce an open source version that initially will be awkward to bootstrap.

The point is that sapience needs the response time to be short enough to actually respond to the situation before it changes again.
No, not at all in a proof of concept demo.
Self Awareness, AI, Sapience etc. in a machine only have to work at all. You're confusing a lab proof of concept with a commercially viable product.
A) Proof of Concept (we have no idea how to get there)
B) Commercial products with decent response time.

If we can do (A) at all (which can't be proved or disproved), then eventually and definitely we can do (B).

I read SF as a kid from the mid-1960s and then took up programming and computers partly with the goal of A.I. My first real short story at school was about an A.I.
A.I. is still firmly in the realm of soft SF /Fantasy.
 
sentience: Cogito ergo sum
A very knotty problem. Also known as "self awareness". It's not at all clear to me how Sentience and Intelligence are related. Perhaps creatures of very little intelligence can be "self aware" (which is slightly testable, which is why I use that term rather than sentience). But can something be intelligent with no "self awareness"? I don't know.

I think it can - a kind of autistic intelligence. The consciousness problem is the knotty problem - the Hard Problem, as philosophers tend to call it - but it is the most interesting of all. For me, the question of intelligence is almost a mathematical, indeed trivial, problem. Consciousness (sentience, if you like) is the biggie.
 
I find it interesting that we ask this question on a fantasy and sci-fi site, when I would guess that many here already realise that the answer is yes: we will create a form of artificial intelligence.

Why? For the very same reason that some people own pets, that others talk to their plants, that heroes have loyal mounts and fantastic familiars: humanity is, in a sense, a very lonely species.

We can communicate with ourselves and with each other, we can communicate with many other species to a lesser or greater degree, but we are still the only ones like ourselves on the planet. As such I think that we have a drive within ourselves in general to fill that gap and that AI is a way we will aim to fill it.

Maybe from different angles - some will be pushing for better and better computers; others might go at it from biology (super-smart lab rats!). But I think the end result is the same: we will seek to create a companion for ourselves - one with function, but one to fill that void.

Interestingly, I suspect any AI is likely going to be a merging of biological and technological advances. Indeed, it would not surprise me if within the next few decades we might see a rise of "bio-computers"; that could be the next huge leap in computing power.
 
I thought everything could be described by mathematics
Not today, sadly. Perhaps we don't know enough.

That we will create a form of artificial intelligence.
We will try. At present, despite the hype, we have not made any progress at all. None. It's a good idea to try.
There is no assurance that we will ever succeed.

we might see a rise of "bio-computers"
What might they be?
If you mean computer components made from biological elements, they are too large, too short lived and too slow.
Biological systems are very slow and achieve results from massive parallelism and self repair.
Computers and biological systems have almost nothing in common.

Interestingly, I suspect any AI is likely going to be a merging of biological and technological advances
You could have an "A.I." in the sense of editing existing genetic material and breeding a mutant creature good at some task (drug searches?). But no artificially engineered, genetics-based creature can solve problems or do tasks in the sense a computer does.
The point is: what task or problem are we trying to solve that can't be done by existing creative people or programmed computers? People are really easy to make, though with a 20-year latency from "order placement".
 
When we put our minds to it, we can do anything, even create an AI. Is it a good idea? The only way to find out is to turn it on once we've created it. Of course, before turning it on, it might be a good idea to put in safeguards, like Asimov's Three Laws. :)
 