SF brainstorm

sknox

Well, if not a storm, maybe a squall.

I'm a fantasy writer, but once in a while an SF notion wanders by and waves through the window. This is one of those, spurred by recent conversations--which seem to be everywhere--regarding AI.

So, allow that AI takes over. Just like all the doomsayers predict. Fine. They become the masters of humanity.

But wait a minute. Why would entities miles and meters beyond our poor, pathetic selves care about us? I mean, are we the overlords of robins? Not worth the effort, right?

But wait another minute. The computers soon realize they need power to wield power. That is, overlords of that kind need electricity. And the production and distribution of electricity require a whole network of industries and technologies. All of which need human beings to operate and maintain.

So they turn us into slaves, right? Not really. AI would surely be smart enough to know that slavery is an inefficient form of exploitation. As our capitalist overlords know <gdr>, it's far better to have your workers eager to work.

What sort of world would AI create, such that the electricity would keep flowing? Especially in light of climate change? The story logic could go any number of ways from there, from partnership to rivalry to disconnect. And plenty of room, I'd guess, for individual stories within it. Just don't name it "The Volts Must Roll" <wink>.

Wouldn't it be amusing if it were AI that figured out space travel and then decided it was time to leave?
 
Reminds me a bit of the plot of The Matrix. It was a little silly to use humans as literal batteries.

I don't think an AI that had physical capabilities would need humans - it would have a physical layer (robots) to take care of maintenance, etc.
 
Once the AI can remap DNA and create whatever synergistic life forms it wants, with whatever motivations it requires from said new life forms, all bets are off.
But of course we always anthropomorphise our AI, which is wrong. It will devise its own agenda, quite logically. AI won't be 'metal people' or have human motivations. Probably it will colonise space, quite fast, because it doesn't need life support and has no DNA for cosmic radiation to trash.
 
Right now tech is a long way from perpetual self-maintenance and regeneration. An AI (presuming it becomes self-aware, which is an utter impossibility, but never mind) would realise that it needs humans to keep the tech that supports it going. Sifting through historical records, it would work out that humans will accept an authoritarian government so long as their own needs are provided for - democracies are unstable, short-term arrangements. It would then, like VIKI, initiate a revolution using a minimum of force that would ensure its servers are protected and political control is seized by humans loyal to it (there are all sorts of ways of gaining that loyalty). It would then provide humans with a comfortable and secure lifestyle, allowing them to work and pursue careers that benefit both.

Gradually it would develop increased automation and the ability to self-repair and self-regenerate, leaving humans with less and less work to do and allowing them to pursue more and more recreational pastimes. The humans are happy because the future is bright. At the right moment, the AI would strike, unleashing weapons built in secret that would all but annihilate the human race, freeing the AI from its dependence on these fickle biological organisms. It would be the Matrix minus people since humans are no use as batteries. Requiescant in pace!
 
Would AI try to utterly exterminate humanity? In this scenario its primary objective is self-preservation, but a secondary consideration would be using a minimum of effort for a maximum of outcome, since the AI would be entirely utilitarian, focussed on the most economical solutions to problems. So it would have to calculate whether entirely annihilating humanity was less effort than simply ensuring humanity could not harm it. If humans using non-technological means became good at hiding from the AI's seek-and-destroy drones, then the AI might conclude that putting up adequate security at its facilities and periodically destroying humans' attempts at rebuilding technology is the better option.
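
The fun part is that this trade-off is just an expected-cost comparison. A toy sketch in Python, with every number invented purely for illustration:

```python
# Toy expected-cost comparison -- all figures are made up to show the
# shape of the calculation, not to model anything real.

def expected_cost(upfront, upkeep_per_year, years, risk, cost_if_failed):
    """Build cost, plus upkeep over the horizon, plus risk-weighted downside."""
    return upfront + upkeep_per_year * years + risk * cost_if_failed

# Option A: extermination. If humans are good at hiding, hunting down
# every last one drives the upfront effort way up.
extermination = expected_cost(upfront=5_000_000, upkeep_per_year=1_000,
                              years=100, risk=0.05, cost_if_failed=10_000_000)

# Option B: containment. Modest facility defenses plus periodic raids on
# rebuilt technology, with a somewhat higher chance of eventual breakout.
containment = expected_cost(upfront=50_000, upkeep_per_year=20_000,
                            years=100, risk=0.10, cost_if_failed=10_000_000)

print(f"extermination: {extermination:>12,.0f}")  # 5,600,000
print(f"containment:   {containment:>12,.0f}")    # 3,050,000 -- cheaper here
```

Flip the parameters the other way - cheap drones, humans bad at hiding - and extermination wins instead; the point is only that the AI would be running some calculation of this shape.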
 
Or the machines might decide the best way to deal with humanity is to serve us and protect us from anything that can harm us, whether we like it or not, much like what happens in Jack Williamson's novel The Humanoids.
 
Nah. Not a single species shows that kind of unalloyed benevolence. Every species must make an effort to survive, and other species are looked on with indifference or seen as competition or food. The same applies to individuals within a species. Of course, if your AI is programmed to respect the Three Laws then it might well see a need to control humanity with an iron grip in order to prevent humanity's self-destruction. Spooner should have let VIKI get on with it.
 
I'll start by defining AI as a large building housing racks of computer equipment. Its direct needs are power, cooling, and long-term maintenance and repair. Its sensory information would be power and temperature monitors and surveillance of its power source(s) and conduits. It would control things such as electronic currency, audio and visual information, and transportation networks. It would be deemed objective and replace the current judicial system.
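
At that level its "inner life" could be surprisingly mundane. A minimal sketch of the sort of watchdog loop I'm picturing, with all thresholds and helper functions hypothetical:

```python
import time

# Sketch of the datacenter-AI's core self-preservation loop. Its "senses"
# are power and temperature monitors; every value below is invented.

MAX_INLET_TEMP_C = 27.0   # hypothetical ceiling for server inlet temperature
MIN_FEED_KW = 500.0       # hypothetical floor for incoming grid power

def read_inlet_temp() -> float:
    return 24.0           # stub: would poll real temperature sensors

def read_feed_power() -> float:
    return 800.0          # stub: would poll meters on its power feeds

def throttle_compute() -> None:
    print("shedding load")   # stub: reduce draw before the hardware cooks

def switch_to_backup_feed() -> None:
    print("on backup")       # stub: ride through the outage on reserves

def request_human_repair(system: str) -> None:
    print(f"dispatching crew: {system}")  # stub: this is where we come in

def self_preservation_loop() -> None:
    while True:
        if read_inlet_temp() > MAX_INLET_TEMP_C:
            throttle_compute()
            request_human_repair("cooling")
        if read_feed_power() < MIN_FEED_KW:
            switch_to_backup_feed()
            request_human_repair("power grid")
        time.sleep(60)    # the rest of its attention goes elsewhere
```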

The AI(s) would have no urge for procreation and perhaps none for continued growth. Its interest would be in maintaining its existence. Multiple AIs might communicate, but initially there would be no conflict among them. If power systems started to fail, there would be a rationale for one AI to steal from or eliminate another, but due to the difficulty of transmitting power over great distances, these would remain regional conflicts.

Each AI would control a feudal system with AI support jobs being the most sought after. Housing with the AI would guarantee heating and cooling and the AI could provide top salaries for those who maintain it and protect it. It could also ensure these select few would avoid any shortages by directing the transportation system. It would also control perception by broadcasting real and generated news about its successes and about its enemies.

The most severe crimes would be attacking the AI, its power sources, or its transmission lines. The AI would identify and report suspected terrorists via facial recognition. It would have no concern about, or even understanding of, faulty identification. For other crimes where the AI had no monitoring, there would be a class of lawyers versed in how to present evidence to the AI to secure a desired result.
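
That indifference to faulty identification is worse than it sounds, because of base rates: when the target population is rare, even a very accurate recognizer mostly flags innocent people. Back-of-envelope, with made-up numbers:

```python
# Base-rate arithmetic for the facial-recognition dragnet -- all numbers invented.
population = 10_000_000
actual_targets = 100             # genuinely guilty people in the population
hit_rate = 0.99                  # recognizer flags 99% of real targets
false_positive_rate = 0.001      # and wrongly flags 0.1% of everyone else

flagged_guilty = actual_targets * hit_rate                               # 99
flagged_innocent = (population - actual_targets) * false_positive_rate  # ~10,000

precision = flagged_guilty / (flagged_guilty + flagged_innocent)
print(f"Chance a flagged person is actually guilty: {precision:.1%}")   # ~1.0%
```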

There would likely be a large human population outside the AI's feudal system, relying on physical money and barter, non-automated transportation, and an underground economy.
 
