Sci-Fi Research?

Vaz

Hey Folks.

I'm about to dip my toe into writing some Sci-Fi stories - I usually only ever write fantasy - and I feel like I'm not really qualified to do so. Like, I feel that I should research some ideas on society - how would it adapt to AI life? How would it evolve? Things of that nature.

Just wondering if any of our writers here could suggest some books, fiction or non-fiction, as research, or some general ideas to point me in the right direction?

Sorry for the ramble!

Thanks in advance.

V
 
For anything to do with AI and nanotechnology, Drexler and Kurzweil are quite handy to read. Kurzweil has a site on the Web exploring his ideas:

Ray Kurzweil - Wikipedia, the free encyclopedia (wiki about him, with links to his sites)
Kurzweil Accelerating Intelligence (his own site)

K. Eric Drexler - Wikipedia, the free encyclopedia (similar about him)

Marvin Minsky started earlier in this field: Marvin Minsky - Wikipedia, the free encyclopedia

Oops, one more: Vernor Vinge (inventor of the term "singularity" as applied to technology and AI): Vernor Vinge - Wikipedia, the free encyclopedia

And finally, a site exploring the next 10,000 years of future history with the Singularity (in fact several of them; read further there!): http://www.orionsarm.com/

WARNING: Some of the work by the first three is a bit impenetrable; full of complexity theory and rather a lot of neologisms. Prepare to have your mind expanded! Ditto some of the articles in OA.
 
There's lots of fiction and non-fiction on the subject, but everyone's opinion differs. The first thing you want to do is work out how advanced the AI is, then whether there are control measures in place (Asimov's Three Laws of Robotics, for example). Once you have your level of AI, really think about its impact on all aspects of human life and what will follow from that.

For example, a boom in more prolific and capable robotics would put many people out of jobs (construction, service and military especially), so poverty will be rife and the rich-poor divide will grow; how does a growing population cope with that? Authoritarian control of the masses, population culls, anti-AI resentment, wars?

It partly depends how far into the future you want to go. The creation of AI, or the dystopia following the Singularity? The non-fiction suggestions above are good for gaining a scientific basis. Or a far future where humans and AI live alongside each other? There are plenty of fiction authors out there to give you a feel for it (my favourites would be Isaac Asimov's Robot stories and Iain M. Banks's Culture novels).

Don't forget, though - the world you create is just the backdrop; the characters are the story.
 
I wrote about AIs in a near-future world; my research focused primarily on brain function and on developments in stem cell technology that allowed lab-grown human brains to replace CPUs. The problems that had to be overcome were things such as how the brains could be kept alive inside an AI. I did a lot of research, but very little of it is apparent in the book; I just wanted to make sure the basis for the technology was sound. As @Dave Barsby has already mentioned, the story world is as important as the AIs themselves, and the future of AI technology throws up an awful lot of questions.
 
Thanks so much for all your responses.

Just bought Asimov's complete Foundation books, as well as some of Ray Kurzweil's.

Looking forward to diving into them tonight! :)

V
 

Don't want to rain on your new purchases but I can't recall any AI in the Foundation books. Damn good classic sci-fi reads, though (well, the first three at least). His I, Robot short story collection is more what you're after.
 
In terms of philosophy and social implications, you can have a wonderful time watching Battlestar Galactica and Caprica. They deal with some wonderfully deep aspects of AI, including spirituality, and latterly, in the blindingly good Caprica (more of a drama), the appropriation of AI by organised religion.

Of course you'll need over 100 hours spare. :eek:

pH
 


I find the ORBITAL VECTOR HOME PAGE to be quite useful. It's better than Atomic Rockets, since Atomic Rockets sticks to only what we currently know how to do (NASA and Project Orion).

I have done so much research that I have only done a bit of writing, but I am at the point where I know more than enough to actually write a story without ignoring physics in the areas where I don't want to.
 

Oops... sorry, I thought you wanted general sci-fi info.

At any rate... I find Portal 2 to be a great inspiration for any rogue AI story. I don't know how far you plan to go with sticking to realism, but if you google it, you will find that even people who are INTO making AI work admit that machines and binary code alone won't make anything resembling a human mind.

It won't have free will. It will only do what it's programmed to do. It literally can't disobey you, even if it wanted to (and it won't even want to).
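
To make the "it will only do what it's programmed to do" point concrete, here's a toy sketch (purely hypothetical, in Python, and nothing to do with Portal or any real AI project) of a rule-based chatbot. Every reply it can ever give is a line a human typed into its rules table; step outside the table and it has nothing.

Code:
# Minimal rule-based "chatbot" sketch (hypothetical illustration only).
# Every possible reply below was written by its author in advance, which is
# the point: it does exactly what it is programmed to do and nothing more.

RULES = {
    "hello": "Hello. How can I help?",
    "open the pod bay doors": "I'm sorry, I can't do that.",  # it "refuses" only because a human wrote this line
}

def respond(user_input: str) -> str:
    # Look the input up in the fixed rules table; fall back to a canned reply.
    return RULES.get(user_input.strip().lower(), "I don't understand.")

print(respond("Hello"))                   # -> Hello. How can I help?
print(respond("Do you have free will?"))  # -> I don't understand.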

The only way you will ever have a robot who acts human is if you add some biological material to the mix. Even the name of GLaDOS (the main villain from the Portal series) hints at this: Genetic Lifeform and Disk Operating System.

If you use animal material instead of human material for your AI, don't expect to get human-level intelligence out of it.
 

I think the general opinion is that AI is not going to be possible without a-life and some form of learning process. However, I disagree with you about the impossibility of non-biological AI; what's so special about the complex web of nanomachines that comprises life, anyway?

It might appear - and to some people does appear - that sapience is an emergent phenomenon; that consciousness itself is an unreal construct which arises from a complex, highly interconnected web of simpler processes inside (so far as we know, only) the human brain; and that it only arises there because the human brain is complex enough to support a sufficiently complex web. Incidentally, it appears likely that sapience also arises (to greater or lesser degrees) in various other sentient organisms on Earth; opinions vary, but the list appears to include the "anthropoid" apes, dolphins and orcas, various other whales and possibly elephants.
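
If you want a hands-on feel for what "emergent" means here, the classic toy example is Conway's Game of Life: every cell obeys one trivial local rule, yet moving, self-sustaining patterns appear that the rule never mentions. A rough sketch (just an illustration of emergence, not a claim about how brains work):

Code:
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; live is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next turn if it has 3 neighbours, or 2 and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells whose overall pattern crawls diagonally forever,
# even though no single cell knows anything beyond its immediate neighbours.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # same shape as the start, shifted one cell diagonally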
 
Your argument is basically that if evolution can somehow produce ordered designs in nature with strict fail-or-succeed tolerances, then surely man making sentient life from non-living matter must be possible? Something without intelligence created ordered rather than random structures, so a being with intelligence (man, or insert your intelligent sci-fi race here) should be able to create a new life form from non-life?


I have never bought into that... never will.


That is essentially making nothing/randomness/blind chance a god, and making man, the product, also a god who can create life from nothing.
 
Sorry to rain on the parade - but why look to sf for what AI might do to culture etc.? Why look to others to inspire your take on it? Look at the world now, and ask yourself 'what if....?' Being original, and not led by others' 'what if....?', is something very, very special (and it's what all the writers named above did).

J :)
 

It's a lot simpler than that. A sufficiently complex replicating network of nanomachines has the potential to be alive and sapient. How do we know that? Simple: there is an existence proof. And where is that proof? Look in the mirror. Taken over the whole Earth, such networks replicate millions of times per day.

(Nitpick; it is well known that said network in the existence proof requires, for its replication, the assistance of another such network of slightly different design; it is also well known that the resulting product is not an exact copy of either of the producers.)

Given that it is possible, I refuse to believe that there is something privileged about the design of the already-existing networks, or that sapience and/or life (both concepts being sloppily defined, but they do exist) is impossible in machinery of a different design; silicon or carbon nanotubes, for example, rather than protein and DNA.

I think it's also worth noting that sapience arises from non-sapience, apparently spontaneously, also millions of times per day. This is just my opinion, but it's my contention that a newborn baby is not sapient; although it has the potential to become such, and therefore should be respected as if it already was.

I actually think that sapience in computers will arise without our noticing, and not by anyone's specific design or action - but as a result of ever-increasing sophistication in AI, put into them for the convenience of humans. There are already computers that do a fairly good job of pretending they are sapient, although they are rather easy to trip up and expose the "lie"; but they are getting better, and doing so rapidly. You probably have one in your pocket right now.
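
For a feel of how that "pretending" works under the hood, here's a crude ELIZA-style sketch (an illustrative toy only; Siri, Cortana and friends are vastly more sophisticated, but the tripping-up failure mode is the same in spirit). Its canned reflections sound almost attentive until you ask something off-script:

Code:
import re

# Crude ELIZA-style responder (hypothetical example, not any real assistant's code).
PATTERNS = [
    (r"\bi feel (.*)", "Why do you feel {}?"),
    (r"\bi am (.*)",   "How long have you been {}?"),
    (r"\bmy (.*)",     "Tell me more about your {}."),
]

def reply(text: str) -> str:
    for pattern, template in PATTERNS:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(match.group(1).rstrip("?.!"))
    return "Please go on."  # the giveaway: anything off-script gets this

print(reply("I feel like the machines are listening"))  # sounds almost thoughtful
print(reply("What is two plus two?"))                   # "Please go on." - tripped up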

I think that machine sapience is going to arise a little like this: one fine day, someone is talking to the umpteenth generation of Siri or Cortana, and suddenly realises that it has passed the Turing test. And when that happens, continuing to insist that the machine isn't sapient will sound like, and be, nitpicking. "If it looks like a duck and sounds like a duck, it is a duck."
 
I actually think that sapience in computers will arise without our noticing, and not by anyone's specific design or action - but as a result of ever-increasing sophistication in AI, put into them for the convenience of humans.

I believe this to be true, which makes for interesting fictional scenarios. When I was at school, one of our class books told the story, as I remember it, of how the world starved: the robotic factories could not make any food, because the robotic machinery did not cut the crops, because the old man who unlocked the barn where the machinery was kept had died. Nobody noticed that the old man had died because, in this technologically advanced world, he wasn't important. After all, his only job in life was to unlock a door.

Just out of interest, does anyone know the name of this story?
 
Luckily for you, a professor called Yuval Noah Harari has just published a book called Homo Deus on this sort of thing. Unfortunately, according to the review I just read, it's a total dystopia in which the super-rich rise to godhood (from the looks of Trump, Eccleston and similar specimens, I'd say they've got a fair way to go) and the rest of us become worthless blobs.

But I agree with Jo on this one. The trouble with a lot of near-future SF and prediction is that it exaggerates the present instead of realistically predicting the future. 1984 is a satire of the present and the recent past. Who, in the Cold War, would have thought that we'd end up fighting Islamists? They probably reckoned we'd end up in a super-cold war, either in a nuclear winter or in a John le Carre novel with extra androids. Even when SF does try to predict the future, it rarely considers the backlash (Dune, with its anti-robot jihads, is a notable exception). Take one of those 1980s futures where amoral corporations own everything (you know, the ones that feel uncomfortably accurate these days). The citizens in such stories often just become poorer and poorer until they rise up, and then the story ends. Why don't they rise up earlier, when it's clear what the end result will be for them? Isn't that more realistic?

So what I am saying, at the end of this rather rambling post, is that it is exceptionally difficult to predict the future of AI and what it will do. It could go half a dozen ways and, depending on how you write about them, they could all be credible. And does it even have to be credible? Do Androids Dream of Electric Sheep? isn't really very believable as a likely depiction of the future, but it works because it has a point to make and it is psychologically believable. I think you should write what seems like an entertaining story to you and go from there.
 
Watching Star Wars a thousand times?

Watching sci-fi films and reading books are a start. I don't have much knowledge of science, but once I know what I am writing I try to look up some of the premises and whatever on the internet.

But I'm still unpublished, so what do I know?
 
In terms of SF, it's sort of related*, but if you haven't read it, I'd wholeheartedly recommend Permutation City by Greg Egan.

Also as Toby has mentioned, there's a reasonable amount of discussion of AI and what it means for humanity in Philip K. Dick's work, although for some reason I always gravitate to the surreal short story The Electric Ant...

...and just to balance the blind optimism that we actually will achieve AI, whatever that means, the argument in Roger Penrose's The Emperor's New Mind is a reasonable stab at why AI is unachievable (at least with computers). At the very least you will get an account of Turing machines, quantum mechanics, Gödel's theorem and so on. It's well written in explaining the science and philosophy, and thought-provoking, if a touch nebulous in its conclusion, I admit.**

--------------------------------------

* computer simulations of consciousness and life rather than AI per se - but taken to the nth degree, as it should be in a SF novel
** Partly down to the fact that no one knows how the mind works, a fact that most hard practitioners of AI tend to conveniently ignore.
 