I can't add much to the speculation on whether AIs will ever happen; who knows? I think they will eventually, but that is only my opinion. No, actually, it's probably more accurate to say 'my belief.'
However, I would like to address one of the concerns. SF has always portrayed AIs as becoming either friends or enemies of humanity. It has been suggested here that they would likely have self-preservation or reproduction 'instincts' simply because they are intelligent. Without such instincts it is hard to see why they would become humanity's enemy: without such drives there is no real need to compete for resources, and without such competition, what logical purpose would being an enemy actually serve? However, I can see no reason at all for an AI to develop either instinct (or any 'instincts', for that matter). Our instincts do not come from our intelligence; they massively predate intelligence. They developed biologically through evolution, and I suspect (though certainly don't know) they are only present because the only organisms that survived are the ones that demonstrated such tendencies (i.e. the desire to continue existing rather than dying).
AIs, on the other hand, have no such history of evolution. They will only have such 'instincts' if they are specifically given them; there is no reason for such instincts to spontaneously appear. We (or other AIs) must decide whether to include them when designing the new AI. Self-preservation would probably be a useful trait to give your AI, but a desire to reproduce would not seem particularly useful except in the limited area of so-called Von Neumann machines, and there are distinct dangers in giving a self-replicating machine such a desire, as has frequently been explored in literature. And I think that self-preservation alone would not be enough to cause sufficient competition to create enmity.
Another topic mentioned earlier is the idea of infinitely expandable intelligence. Here, again, I think this is unlikely, as I strongly suspect that there will come a point of diminishing returns on constantly adding extra hardware. There will, I think, come a point where the difficulty of organising the extra hardware takes all the capacity of the extra hardware. Have you ever wondered why we aren't massively more intelligent than we are? It would seem, once intelligence was established, that increasing that intelligence would be an excellent evolutionary trend, and yet it appears to have plateaued, and a very long time ago at that (I believe we are not actually much more intelligent than Cro-Magnons). Consider also the typical proximity of genius and madness. Maybe simply increasing intelligence just doesn't work for us, and it might not work for AI either?
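To make the diminishing-returns point concrete, here is a toy model (the overhead curve is entirely my own assumption, not an established result): suppose each extra unit of hardware adds one unit of raw capacity, but the cost of organising n units grows like n log n. Effective capacity then peaks and eventually goes negative, which is exactly the "organising the extra hardware takes all the capacity of the extra hardware" point.

```python
# Toy model of diminishing returns on adding hardware.
# Assumption (purely illustrative): raw capacity grows linearly with the
# number of hardware units n, while the coordination overhead of
# organising those units grows like n*log(n).
import math

def effective_capacity(n, overhead_coeff=0.08):
    raw = n                                          # one unit of capacity per unit of hardware
    coordination = overhead_coeff * n * math.log(n)  # cost of organising the units
    return raw - coordination

for n in [10, 100, 1_000, 10_000, 100_000, 1_000_000]:
    print(f"{n:>9,} units -> effective capacity {effective_capacity(n):>12,.0f}")
```

With these made-up numbers, effective capacity peaks at around 100,000 units and is negative by a million: past the peak, every unit added costs more to organise than it contributes.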
Also, more intelligence (or at least significantly more) would need a larger brain

But you are guessing.
I've not heard it said that they are particularly intelligent

You are confusing Education and Environment with intelligence. Unless they have brain damage from eating lead paint or mercury, etc., there is no evidence that such people are less intelligent. Making bad choices due to poor upbringing or lack of education isn't evidence of a lack of intelligence.
However, the media’s preoccupation with brain size is probably something of a distraction. The link between brain function and brain structure or size is still not clearly understood, so we can’t reliably conclude from this study how differences in brain size influence physiology or behaviour.
yes we understand it about as well as we understand sentience

sentience: Cogito ergo sum
However I would like to address one of the concerns.

Your post was very interesting and made me think differently. However, in many circumstances where we might use intelligent machines in the future - places inhospitable to humans like other planets, nuclear reactors, the bottom of the ocean - it might be a good idea to give them a survival instinct, and the ability to reproduce too. It is much easier to send a single machine and have it multiply itself at the work site, and if it has no sense of danger or of being damaged, then it isn't likely to last very long. So, I agree that it isn't a prerequisite for an AI, but it is possible and maybe even likely, especially if we are creating androids to replace humans in dangerous tasks.
But AI is inherently impossible until we can actually describe properly what intelligence is. Understanding ourselves, and possibly some animals (though they don't respond usefully to questions), is the starting point. Not the present so-called "expert systems", current "evolutionary software", "neural networks" and all existing computer AI research. None of it is really about AI at all, but about simulating responses to stimuli, using databases, and solving domain-specific problems.
The starting point for any computer system or program is a clear specification. Give me one for intelligence and I'll have a demo that works on any Windows PC, probably in a few months. Computer "power" or extra hardware is irrelevant! If you knew how to do it, the inherent property of such an artificial system is that ALL would be equally "smart"; only the response time would vary with "computer power".
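For what it's worth, the "equally smart, only slower" claim can at least be sketched; the solve function below is a purely hypothetical stand-in (here just sorting), since nobody has the actual specification being asked for. The point is that the answer a deterministic program produces depends only on the program, while the hardware merely sets how long you wait for it.

```python
# Sketch: the output of a deterministic program is a property of the
# program, not the hardware it runs on; weaker hardware only adds latency.
import time

def solve(problem, slowdown=0.0):
    """Hypothetical stand-in for an 'intelligent' routine (here: sorting).
    `slowdown` simulates running the same code on a slower machine."""
    time.sleep(slowdown)
    return sorted(problem)

fast = solve([3, 1, 2])                 # "powerful" hardware
slow = solve([3, 1, 2], slowdown=0.5)   # "weak" hardware
assert fast == slow                     # same answer, i.e. equally "smart"
```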
Response time is a poor indication of "intelligence".
Edit:
Another myth: "Intelligence is an emergent property of complexity".
There is no evidence for that at all. It's pure hand waving / wishful thinking.
Most of the volume of the brain appears to be dedicated to the processing and control of autonomous systems, not to decision making, creativity or problem solving.
Given that humans are intelligent, it must be possible for an assemblage of computing devices to become intelligent

There is ZERO logic in that statement. We don't really know how people became intelligent, nor what exactly intelligence is. The only way we can have an Artificial Intelligence is to design it. No mechanical intelligence can self-emerge. Computer chips are 100% purely electronic, miniaturised implementations of mechanical mechanisms. There is no property of them we didn't implement by design.
As an example, this time nothing to do with AI: it's been possible in theory for many years (probably since the early 1970s at least) to create a computerised prediction of the weather three days in advance. Unfortunately, at that time, the prediction would be of rather limited use, because it would take a run time of ten years or so to generate it.

Yes, controlling a missile or a walking robot needs very rapid response. As you say, that is nothing at all to do with AI. AI at even a one-year response time for something that takes us a couple of minutes would STILL be AI. Complexity and power/performance are irrelevant.
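Spelling out the arithmetic in that weather example (the ten-year figure is the poster's; the rest follows directly): a forecast is only useful if its run time is shorter than the forecast horizon, so the 1970s machine would have needed roughly a thousandfold speedup just to break even.

```python
# The weather example as arithmetic: a 3-day forecast that takes
# ~10 years to compute arrives long after it has stopped being a
# prediction. This computes the speedup needed just to break even.
horizon_days = 3                 # how far ahead the forecast looks
runtime_days = 10 * 365          # the poster's "ten years or so" of run time

speedup_needed = runtime_days / horizon_days
print(f"Speedup needed to finish within the horizon: {speedup_needed:,.0f}x")  # ~1,217x
```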
The point is that sapience needs the response time to be short enough to actually respond to the situation before it changes again.

No, not at all in a proof-of-concept demo.
sentience: Cogito ergo sum
A very knotty problem. Also known as "self awareness". It's not clear at all to me how sentience and intelligence are related. Perhaps creatures of very little intelligence can be "self aware" (which is slightly testable, which is why I use that term rather than sentience). But can something be intelligent with no "self awareness"? I don't know.
the question of intelligence is almost a mathematical, indeed, trivial problem.

Send me the formula and spec and I'll implement it.
I thought everything could be described by mathematics

Not today, sadly. Perhaps we don't know enough.
That we will create a form of artificial intelligence.

We will try. At present, despite the hype, we have not made any progress at all. None. It's a good idea to try.
we might see a rise of "bio-computers"

What might they be?
Interestingly I suspect any AI is likely going to be a merging of biological and technological advances

You could have an "A.I." in the sense of editing existing genetic material and breeding a mutant creature good at some task (drug searches?). But no artificially engineered, genetics-based creature can do problems or tasks in the sense a computer does.