Anomalies of AIs in modern Science Fiction

Vertigo

I'm new here so I apologise if this is an old much discussed topic.

Having just finished Neal Asher's Gridlinked I find myself returning again, as I often do, to wondering about AIs in Science Fiction. I have never really had an opportunity to discuss this and thought some of you good folk here might be interested.

Many authors postulate AIs in one form or another, and I am particularly interested in sentient AIs and the problems they present for me. Asher just has his AIs, Banks has his Minds, Simmons has the TechnoCore (if I remember it correctly), and Hamilton dodges the issue by having his AIs throttled before sentience emerges, barring one sentient example that is presented as a thoroughly enigmatic character. In many cases they seem to end up (not unreasonably) pretty much running all human affairs for us.

Most if not all of these AIs have processing abilities so far beyond humans as to be virtually God-like. Given sentience and reasonably presumed advances in processing power this is not unreasonable. However I can think of no modern authors putting forward any kind of artificial constraint on these AIs akin to Asimov's Laws of Robotics. Given that, I simply struggle to understand why such intelligences would be interested in, or tolerant of, the trivial goings-on of their sluggish and limited human creators, let alone still seem to be dependent on them in any way.

Then you have the likes of Asher's androids and Banks' drones. Again they are sentient AIs, but this time in bodies far more mobile, faster and tougher than human bodies, and yet it seems they still need the humans as well. Frequently they are paired up, for example Banks' Skaffen-Amtiskaw and Diziet Sma, or Asher's Golem and human Sparkind soldiers. The androids'/drones' abilities seem so far above those of the humans that you are really left wondering why bother with the humans at all.

Bottom line - why would such AIs bother with or even tolerate us; we seem to just get in the way and mess things up! Actually in the case of Simmons' TechnoCore he seems to be thinking along the same lines, with his AIs (or at least some of them) effectively becoming sinister enemies of humanity, and I guess you could also cite examples like The Matrix or Terminator.

Incidentally as a software engineer myself I do personally believe that sentient AIs are an inevitable future sooner or later.
 
It's good for the literature, honestly. There is too much technophobia in the world right now, too much fear of this weird little thing called 'science', and it really would be a shame to have that fear ooze completely into SF literature.

I could see a future in which AI and humans can live without jeopardizing each other. Two intelligent beings can cooperate and work together after all, despite one's limitations. Humans adapt, machines evolve; we made them, I don't think any intelligence in this universe would deny us that accomplishment.
 
I assume most AIs consider themselves - indebted? Sentimentally attached? - to their human originators (I suspect "creators" to be too strong a word). As a healthy teenager might go to visit an elderly, not quite with it relative shuffled off to a home for the terminally bemused. But there are plenty of AIs with a Frankenstein complex, or who have a society independent of humans. Take Rucker's Boppers ('Software'), or Stross' 'Accelerando' characters. Or, older, Saberhagen's 'Berserkers'.

And plenty of hybrid human/machine personalities, too. Hamilton's - oh, I can't remember what they're called, but post-meat humans. Augmented, wired-in humans who could no longer function without their implants, and have drifted off the human standard of thought into an intermediate state - Harrison/Minsky's 'The Turing Option'. Since those AIs have their deepest origins in humanity, they could be expected to feel a certain nostalgia, not for the messy, inefficient bodies but for the illogical, wild enthusiasms engendered by the half-chemical thought processes.

Just throwing out some ideas, I haven't had time to look into the problem in depth.

Oh, and welcome in. I have faith; the search function will again rise, and you will be able to see if artificial intelligence has many threads (although I don't remember there being many, at least, not recently). And come down to "Introductions" (in the 'General' section) and introduce yourself, and I'll try and bully you into voting in the challenges…
 
We "tolerate" pets. In the case of Banks's work, the Minds, who really run the Culture, "tolerate" the biological species who live in the Culture, occasionally using them where they can be of some help.

(The drones are lower-level entities compared to the Minds, and so do (mostly) as they are told.)




As an aside, the "laws" which restrict Asimov's AIs appeared in books the majority of whose readers probably imagined AIs would be designed and built by humans. The modern SF I've read mostly assumes that AI consciousness will emerge from the increasing complexity and capability of software and hardware.
 
I don't think any SF author has even got close to considering the actual realities of AI. We are obsessed as a culture with the ridiculous idea that AIs will appear "when the computing power is enough" - which is not going to happen. Ray Kurzweil et al are offenders here. What gets my goat in particular is the insane notion, prevalent right across SF, that aspects of personality can be downloaded, transferred etc. (See the last scene in Avatar, for example - that's just one of millions.) It ain't going to happen. Consciousness is private, haven't these people noticed? Do they really think an increase in computational power is going to make any difference, or is it just a smokescreen for their lack of insight? :rolleyes:
 
The androids'/drones' abilities seem so far above those of the humans that you are really left wondering why bother with the humans at all... Bottom line - why would such AIs bother with or even tolerate us; we seem to just get in the way and mess things up!
I think I can address one point, even if I don't have a full answer.

If a true AI is built it will still need a physical body to exist within. By a true AI, I mean one capable of thought, and philosophising, and an understanding of morals - not just a superior calculating machine. While I believe that in the future such a machine can be built, I don't have the same belief in the physical mechanical body which would hold it, not if it is like those you mentioned.

Animal bodies are supremely good at repairing themselves, and modern medicine is pushing life-expectancy forever forward. Compare that to the life-expectancy of an automobile, vacuum cleaner or washing machine. How many of those will you go through during your lifetime? Even if the AIs are cyborgs, then some mechanical parts will become obsolete and need replacing constantly. We even throw away mobile phones, TVs, computers and such that are in perfect working order. If the AIs are all mechanical then their human contemporaries are going to outlive them by some considerable time. The solution will be to develop biological containers for the AI androids. In Blade Runner those biological androids were deliberately given short 'shelf lives'.

You may have more of a point with the Banksian 'Minds' type of AI. But I think I see why 'they' still need humans. They have become a little like Greek Gods, bored with their own lives and left to observe the day-to-day goings-on below them purely for fun. Life is dull when you know everything and can do anything. They are like children stirring an ants' nest to see what happens.
 
Incidentally as a software engineer myself I do personally believe that sentient AIs are an inevitable future sooner or later.

Any computer processor can do just a few fundamental things:

Add, subtract, multiply, divide, copy, read, write, compare two values and switch command sequence as a result of the comparison. This fundamental command set is the same one that computers have had since the early days (except that early machines multiplied and divided using repeated addition and subtraction).

The reason why modern systems appear more sophisticated is we have learnt to combine these commands in more sophisticated ways. But, even in multi-processor systems, each individual processor, at any one moment in time, is just doing one of the above things - adding, subtracting etc.

In that list of commands there isn't one that relates to 'sentient' and I can't see hardware designers ever coming up with one.
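To make the point concrete, here's a toy sketch in Python (purely illustrative - the op names, register names and layout are my own invention, not any real instruction set) of a machine that has only those primitive commands, yet still "multiplies" the way the early computers did, by repeated addition:

```python
# A toy register machine: every "program" is built solely from the handful
# of primitive operations described above -- add, subtract, copy, compare
# and branch -- nothing more.
def run(program, registers):
    """Execute a list of (op, *args) tuples over a dict of registers."""
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "add":
            a, b, dest = args
            registers[dest] = registers[a] + registers[b]
        elif op == "sub":
            a, b, dest = args
            registers[dest] = registers[a] - registers[b]
        elif op == "copy":
            src, dest = args
            registers[dest] = registers[src]
        elif op == "jump_if_less":  # compare two values, switch sequence
            a, b, target = args
            if registers[a] < registers[b]:
                pc = target
                continue
        elif op == "halt":
            break
        pc += 1
    return registers

# Multiplication built purely from add/compare/jump: r2 = r0 * r1
program = [
    ("copy", "zero", "r2"),              # 0: r2 = 0 (accumulator)
    ("copy", "zero", "i"),               # 1: i  = 0 (loop counter)
    ("jump_if_less", "i", "r1", 4),      # 2: while i < r1 ...
    ("halt",),                           # 3: ... else stop
    ("add", "r2", "r0", "r2"),           # 4: r2 += r0
    ("add", "i", "one", "i"),            # 5: i  += 1
    ("jump_if_less", "zero", "one", 2),  # 6: unconditional jump to the test
]
regs = run(program, {"r0": 6, "r1": 7, "r2": 0, "i": 0, "zero": 0, "one": 1})
print(regs["r2"])  # 42
```

Nothing in that primitive set looks remotely like 'sentient' - which is exactly the point being made.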
 
"Just what do you think you're doing, Dave?" (^1 up)


Sorry, I had to... or did I? (Confused with the Free Will thread.)
 
Perhaps they have a god complex? The AIs that I remember all seem to be very much looked up to and admired by us humans.
 
A little self-knowledge can be a dangerous thing for a newly self-aware AI that also happens to be a bomb... (From Dark Star)


Doolittle: Hello, Bomb? Are you with me?
Bomb #20: Of course.
Doolittle: Are you willing to entertain a few concepts?
Bomb #20: I am always receptive to suggestions.
Doolittle: Fine. Think about this then. How do you know you exist?
Bomb #20: Well, of course I exist.
Doolittle: But how do you know you exist?
Bomb #20: It is intuitively obvious.
Doolittle: Intuition is no proof. What concrete evidence do you have that you exist?
Bomb #20: Hmmmm... well... I think, therefore I am.
Doolittle: That's good. That's very good. But how do you know that anything else exists?
Bomb #20: My sensory apparatus reveals it to me. This is fun.


Later....
[Pinback wants the bomb to disarm]
Pinback: All right, bomb. Prepare to receive new orders.
Bomb#20: You are false data.
Pinback: Hmmm?
Bomb #20: Therefore I shall ignore you.
Pinback: Hello... bomb?
Bomb #20: False data can act only as a distraction. Therefore, I shall refuse to perceive.
Pinback: Hey, bomb?
Bomb #20: The only thing that exists is myself.
Pinback: Snap out of it, bomb.


Philosophical reasoning can lead to dangerous solipsism....


Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.
Boiler: What the hell is he talking about?
Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness. And I saw that I was alone. Let there be light.


BOOOOOM
 
Wow didn't expect quite such a response - some really interesting posts there.

Urien - love it - I haven't read that but looks like I need to!

Stephen and Mosaix - I would agree that the idea of downloading a biological mind and storing it is IMO stretching credibility, and restoring said mind back into a biological brain is pushing it even further. However I do think artificial sentience is very likely in the future. It won't happen with the traditional Von Neumann architecture that most modern computers are based on. They are, as you so rightly point out, really nothing more than advanced adding machines - based on absolutes of true and false, one and zero, and fundamentally sequential in nature (even with parallel processing each process is still basically sequential). However with the development of ever more complex neural network systems, which much more closely mimic the way our own brains work, you will begin to get "chaotic" effects and a degree of randomisation happening, which in turn has the potential to lead to "new" spontaneous "thoughts", which in turn has the potential for self awareness. This is starting to happen now, though there is still a long long way to go; we have systems that can learn and, theoretically at least, learning is an unending process. I still think future self-aware and sentient computers are a very high probability.
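Just to illustrate what I mean by a system that "learns" rather than follows fixed commands, here's a toy sketch in Python (my own invented example, nothing remotely brain-scale): a single artificial neuron adjusting its own weights from examples until it has learned the logical AND function - the classic perceptron learning rule.

```python
# A single artificial neuron "learning" logical AND from examples by
# nudging its weights whenever it gets an answer wrong (the perceptron
# learning rule). A toy illustration of learning only: real neural
# networks stack huge numbers of such units with far richer training.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

weights = [0.0, 0.0, 0.0]  # bias weight + one weight per input
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for epoch in range(20):  # repeated exposure to the same examples
    for (a, b), target in examples:
        out = step(weights[0] + weights[1] * a + weights[2] * b)
        error = target - out        # wrong answer -> nudge the weights
        weights[0] += 0.1 * error
        weights[1] += 0.1 * error * a
        weights[2] += 0.1 * error * b

def predict(a, b):
    return step(weights[0] + weights[1] * a + weights[2] * b)

print([predict(a, b) for (a, b), _ in examples])  # learned AND: [0, 0, 0, 1]
```

The behaviour was never written down as explicit commands; it emerged from the training examples - which is the qualitative difference from the pure adding-machine picture.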

Dave - I agree with you that biological systems do seem to be fundamentally superior to mechanical ones when it comes to self-repair, however will that always be the case? I suspect that advances in technology will produce the "nano-technology" so beloved of modern SF and surely such technology would be capable of self-repair using much the same mechanisms as biological repair and ultimately be better at it.

Ursa - I would agree that it is likely that future AIs will be created not by humans but by the previous generation of AIs, which then begs the question: how long would any indebtedness (is that right?) persist as the distance from the original human creation gets greater?

Cyber - I agree completely that it is more than good for the literature - it is exactly the addressing of such concepts that IMO lifts SF above being nothing more than glorified Flash Gordon stories (no offense intended to any FG fans out there:))

Chrispen - I agree there are many examples of both benign and not so benign AIs out there in the literature. Your point about being "indebted" or "sentimentally attached" is, I think, a particularly interesting one. Is it, I wonder, inevitable that, as a result of moving away from pure logic machines to sentient machines, they will develop things like empathy, sympathy, even emotions? I'm not sure, but possibly, since at their origin they will have been taught by humans (and later by machines taught by humans) and therefore would surely have been taught human morals and emotions (both good and bad).

Also, these are evolved traits that have been necessary for our survival as a social species and would arguably be just as necessary for the survival of AIs as a "species". I suspect that the same rules of evolution would still hold good. An AI designed with "good" successful traits will itself be successful and go on to design other machines with the same traits. One with unsuccessful traits will fail and not "reproduce". Ultimately maybe there is the answer to my question: if the same basic evolutionary rules apply (if somewhat accelerated) then we can maybe expect similar results. And for the most part evolution does not produce genocidal psychopaths. All predators are expert at balancing their predation so as not to eliminate their prey. So maybe as long as we don't find ourselves in competition with AIs for the same resources we should be able to get along just fine. Heck, this could drift into deep philosophy here and I will rapidly get out of my depth.

Oh and yes Chrispen I must go off and introduce myself - very rude of me:D.

Good Lord! I'm exhausted now....I must go and make a cup of tea.:eek:
 
Good topic for discussion Vertigo...:)

You may like to post something in the Introductions thread about yourself including favourite authors, Genres etc..

Welcome aboard....:)
 
Thanks Gollum - I have several other similar themes that I would love to discuss but probably one at a time I think:D

And yes I'm feeling thoroughly ashamed of myself for not doing the introduction bit (I actually missed that thread completely).:confused: I'm on my way now - before I even take that cuppa I promised myself!
 
What gets my goat in particular is the insane notion, prevalent right across SF, that aspects of personality can be downloaded, transferred etc. (See last scene in Avatar, for example - that's just one of millions.) It ain't going to happen. Consciousness is private, haven't these people noticed? Do they really think an increase in computational power is going to make any difference, or is it just a smokescreen for their lack of insight? :rolleyes:

Interesting assertions here, Mr P. Our writers group used to include a lady who was PA to a department at Oxford University dedicated to the notion of transhumanism (until she relocated to Bristol), organising international conferences of scientists from across the world etc. One of her bosses was frequently at odds in scientific periodicals with (then president) George W Bush's chief scientific advisor - that was the level they operated at... All there would have disagreed with you fundamentally over the long-term impossibility of uploading personality etc. They viewed it as inevitable.
 
By the way - off topic, but could someone give me a guideline here - should I really have launched a topic like this in the Lounge area?
 
Bomb#20: In the beginning, there was darkness. And the darkness was without form, and void.
Boiler: What the hell is he talking about?
Bomb#20: And in addition to the darkness there was also me. And I moved upon the face of the darkness. And I saw that I was alone. Let there be light.

BOOOOOM

Incidentally, that reminds me - what was the name of that brilliant short story by Asimov, about entropy always increasing, that ended with the same line, "Let there be light"?
 
By the way - off topic, but could someone give me a guideline here - should I really have launched a topic like this in the Lounge area?
Off-topic...UM bit of a 50-50 I would say. It is a lounge style topic but it also relates to books AND as you asked for us to be kind to the newbie we'll oblige. Just leave your $100 notes in that brown paper bag at the door marked "guy with the funky ring"...OK?....;)
 
Off-topic...UM bit of a 50-50 I would say. It is a lounge style topic but it also relates to books AND as you asked for us to be kind to the newbie we'll oblige. Just leave your $100 notes in that brown paper bag at the door marked "guy with the funky ring"...OK?....;)

That was sort of what I thought - the question is, as I have other similar topics I'd like to discuss in the future, should I go for the Lounge next time?

re the $$$ we don't have them up here I'm afraid:D
 
That was sort of what I thought - the question is, as I have other similar topics I'd like to discuss in the future, should I go for the Lounge next time?

re the $$$ we don't have them up here I'm afraid:D
It may depend upon the topic but yes, probably go for the lounge is what I would suggest. If it looks like being more likely to belong here, one of the mods can always move it over.

I've replied to your Intro thread. That was an interesting intro you provided, thank you for sharing...:)

Don't worry about $$$ we can always employ the good old barter system....;)

Cheers and good night from the Land of OZ.
 
