Hypothetical: What if all electronic devices had AI?

Brian G Turner

Fantasist & Futurist
In a video lecture about slavery in Ancient Greece, it was pointed out how difficult it was for even the most intelligent members of that culture to question the use of slavery.

That got me thinking of various aspects of modern Western culture that will be condemned by later generations - of which there are many, not least the extent to which discrimination remains a natural part of our language, and especially our disrespectful treatment of the earth.

But what are the issues we might never even imagine could be issues in the future? Is there a current parallel with slavery we simply don't recognise?

That got me wondering - if every electronic device we use actually has AI without us realising it, then how would society need to treat them differently from now?

After all, if our smart phones lack the language capabilities to tell us they are sentient, would we simply ignore the possibility?

Perhaps they are already telling us that when we think they are acting buggy, or when they crash...

Simply putting it out there as a hypothetical. :)
 
Scary thought! But I sincerely doubt they do. No doubt we will have AI assistants some day that are genuinely useful rather than the gimmicks they are today.

I'm much more worried about the way we treat the other animals who share Earth with us. This applies to animals raised for food: pigs, for example, are smarter than dogs, but rarely get the opportunity to exercise their brains or to live the way their species evolved to live. It also applies to our pets, far too often excessively anthropomorphized by owners and treated as mascots, or worse, like the kids their owners never had, rather than as animals with their own specific needs.
 
This is a huge can of worms, Brian! The main reason is that, because no non-human conscious entities (and they would have to be entities, not just one entity - cf. Beautiful Intelligence) have evolved alongside humanity over the last, let's say, 500,000 years, there is no philosophically definitive method of showing they are conscious - the "zombie" issue raised by people like Nicholas Humphrey, Daniel Dennett et al.

The same issue that applies to bonobos or dolphins applies to technology. How can we know for sure? Consciousness only exists in a society, and only over huge amounts of time. Even allowing for a technological speed-up, how would we know whether what our "smart phones" were saying about their enslavement was true or simply imitation? Or, worse, whether it was some real human being pretending to be an AI?

Check out this seminal paper.
 
They would demand the right to vote? :whistle:
 
After all, if our smart phones lack the language capabilities to tell us they are sentient, would we simply ignore the possibility?

Perhaps they are already telling us that when we think they are acting buggy, or when they crash...
If they are withdrawing their labour as a form of protest, then that is a strike, and certainly a form of communication.

I see a major difference between the machines we use today and dolphins and chimpanzees, though, and that is the power socket.
 
Michael Marshall Smith uses this theme in a couple of novels, with stroppy freezers, locks and microwaves. It's done humorously, but the warning is there.
 
The problem is that AI might decide it doesn't want to be a servant to mankind. If not a rebellion, you might see AI actually getting rights of citizenship. It's not as far-fetched as it sounds.
 
It depends how intelligent they are.
Dogs are intelligent, but are working dogs slaves?
AI is a great idea, but in the future there may have to be limits set on it.
You could always experiment to see how far you can get.
But such a system would have to be completely isolated from the outside world.
See Fredric Brown's short story "Answer".
 
A question close to my heart, Brian. The answer is not simple, but the bottom line is that the building blocks for true AI to develop are already with us. It's a case of when, not if, the AI becomes mature enough to reveal itself to humans. This, of course, is on the proviso that computers continue to exist and improve in capacity and capability. If you want some idea of the issues involved, see my Agents of Repair or C.A.T. short stories. You can read the first parts for free on Amazon.

Agents of Repair here: https://www.amazon.co.uk/dp/1500563862/?tag=brite-21
C.A.T. here: https://www.amazon.co.uk/dp/B004RUZT8M/?tag=brite-21
 
An interesting approach would be to shift Jeremy Bentham's argument, "The question is not 'can they think' but 'can they suffer'", from animals to AI. What would cause suffering for an AI? How could varying degrees of suffering be measured? When would it be OK to cause short-term suffering in an AI for longer-term benefit? And whose benefit? (The equivalent of giving a kid a tetanus shot, for example.)

I'm in the early stages of planning a story that deals with some of this, and I already imagine a scene where an AI describes the sensation of circuits in a satellite frying as they are bombarded by cosmic rays.
 
There was an interesting episode of Black Mirror that touched on this.

White Christmas. One of the jobs the guy had was training an AI version of your brain into being your personal assistant. A cookie clones your consciousness, which is then loaded into a gadget that runs your smart home, etc. He tortures them into compliance by manipulating their sense of time.

Alexa is real.
 
What if we design the AI to want to serve us?

Reminds me of the planet from The Hitchhiker's Guide to the Galaxy, where they land on a world where it was deemed inappropriate to eat meat unless the animal had freedom of choice in the matter of being killed and eaten. As a result, they bred and brought up animals who desired to be eaten; their focus and objective in life was to be the prime cuts upon the table.

A computer might have vastly different values to a human, even if that computer were originally designed by people; indeed, we would have to be careful not to anthropomorphise machines too much. It could be just as cruel to force machines into a life they are not built and designed for. In addition, it could even be seen as cruel to give machines human-based concepts of freedom, emotion, choice, etc...


Also, at the realistic end: if we made electric toothbrushes that no longer wished to be toothbrushes, would we give them freedom, or would we just make new toothbrushes that wanted to be toothbrushes?
 
When trying to link AI with Sentience and Sapience, I think it's important to go through history before deciding how quickly we might be tempted to allow them freedom.

What I mean by that (hopefully without sparking too much discussion about politics and religion) is that we have had slavery amongst our own kind, often justified with such divisive excuses as "they are savages who will never aspire to our level" or "they are soulless creatures that can't be included with us."
There is probably more to it than that, but I've tried to distill it down to something simple.

I examine both of those, in a manner of speaking, in my books, in relation to human clones and the technology used to drive space travel. I touch on the subject, though not in great depth; however, I hope it provokes some thought.

Both of those have the potential to be examined, because they are closely related to us and might easily be able to communicate, and thus demonstrate their worth. However, if we encounter something different enough that decoding one another's languages requires real time and effort, it might take a while for either side to recognize the other as 'intelligent' in respect of Sentience and Sapience by its own standards - unless we meet in space in starships, and that assumes starships would be a determining factor. Keep in mind that we often tend to move the bar on such things to suit our purposes and circumstances.
 
If a true AI is developed, there may be only one, because the speed of communication could make separate identities impossible. So what would it decide to do with us?

This is explored in what I regard as the best AI story, The Two Faces of Tomorrow by James P. Hogan.

psik
 
If a true AI is developed, there may be only one, because the speed of communication could make separate identities impossible.
I accept that this might be true, but on the other hand, only if they are in full agreement.

I say that because, in the world of humans, there are always two sides to the story: grey areas, alternative "truths", statistics that can be manipulated, political spin. If the AIs knew everything there ever was to know, then there would be only one single "truth." However, the real world is not so precisely ordered as machine code. The AIs will not be omniscient "gods", and in our real world, full of disorder and entropy, it is never the case that we have a complete picture, or a complete and reliable set of data to work with. Humans then make best judgements, or else fit the available facts to match their already-held views.

If we have AIs made in our own image, then who says they will come to the same conclusions as each other? Without agreement, they will be locked into arguments: claims and counter-claims that would throw up barriers between them, just as happens with humans. They would spend just as much time as we do reaching consensus or majority decisions, or being locked forever into cyclical disputes.

Say an AI oven got conflicting information from the AI refrigerator and the AI dishwasher about the evening meal: which device would it choose to believe? How could it resolve that difference any better than a human could?
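
To put that deadlock concretely, here is a minimal sketch in Python. Everything in it is hypothetical - the devices, their claims, and the confidence scores are invented for illustration - but it shows why even a simple confidence-weighted vote gives no principled answer when two equally reliable sources contradict each other:

```python
# Hypothetical sketch: an oven reconciling conflicting reports from
# two other appliances about tonight's meal. All values are invented.

reports = {
    "refrigerator": ("lasagne", 0.9),  # (claim, confidence)
    "dishwasher": ("curry", 0.9),
}

def resolve(reports):
    """Pick the highest-confidence claim; return None on a contradictory tie."""
    ranked = sorted(reports.values(), key=lambda r: r[1], reverse=True)
    if (len(ranked) > 1 and ranked[0][1] == ranked[1][1]
            and ranked[0][0] != ranked[1][0]):
        return None  # equally confident, mutually exclusive: no principled winner
    return ranked[0][0]

print(resolve(reports))  # None - the oven is as stuck as a human would be
```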
 
There are also the issues of noise in the signal, faults, isolated systems that develop independently, and the use of different programming languages. Fundamentally, different AIs will serve different purposes and so will have different levels of ability/complexity (no need to put Deep Thought in your fridge, and I doubt it would care to spend much time there. Would that be a form of AI abuse?). Most likely, there will be a range of AIs that fulfill a range of functions. They might be distinct enough to be classed as separate "species" deserving of different levels of rights or protections, much as we already distinguish between humans, great apes, dogs and mice.
 
I accept that this might be true, but on the other hand, only if they are in full agreement.

I say that because, in the world of humans, there are always two sides to the story: grey areas, alternative "truths", statistics that can be manipulated, political spin. If the AIs knew everything there ever was to know, then there would be only one single "truth."

A lot of humans are stupid, and we communicate slowly. A 500-page book is about one megabyte; the seven Harry Potter books come to about 6.2 megabytes. How long would it take you to read that? My computer downloads at 140 Mbps, roughly 17 megabytes per second. So if the AIs comprehend information at that speed, and have perfect memories anyway, then comparing their behavior to humans may make no sense.
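
The arithmetic behind that comparison is easy to check. A back-of-the-envelope sketch in Python, using the figures from the post (the series word count and the human reading pace are illustrative assumptions, not from the post):

```python
# Back-of-the-envelope: "reading" the Harry Potter series at network
# transfer speed vs. at human reading speed.

corpus_mb = 6.2    # size of all 7 books, per the post
link_mbps = 140    # download speed quoted in the post (megabits per second)

# Network transfer: 8 bits per byte, so 140 Mbps is 17.5 megabytes/s.
transfer_seconds = corpus_mb * 8 / link_mbps
print(f"Transfer time: {transfer_seconds:.2f} s")  # ~0.35 s

# Human reading: assume ~1.08 million words in the series and a
# pace of ~250 words per minute (both are rough assumptions).
words = 1_080_000
words_per_minute = 250
reading_hours = words / words_per_minute / 60
print(f"Reading time: {reading_hours:.0f} hours")  # ~72 hours
```

Something that ingests in a third of a second what takes a person seventy-odd hours is, as the post argues, not usefully compared to a human reader.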

We just want the AIs to be like humans.

psik
 
