Would an AI have its own emotions?

An emotion is really nothing more than a hard-wired tropism. An organism feels fear so it doesn't have to take the time to decide whether a threat is dangerous. It says "eek" and runs. An organism feels affection so it doesn't have to figure out whether it should want to mate with or protect this other organism. An AI might start out rationally working out everything--should I fear this, love that, covet that, etc.--but eventually some of those functions will be simplified and subsumed into lower-level processing, using stereotyped analytical exemplars. This could be called "emotion", although it would be similar to biological emotions only from the point of view of its effect.
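As a hedged sketch of that "subsumed into lower-level processing" step, here is a toy Python illustration. Everything in it (the Agent class, the stimulus fields, the threshold) is invented for the example; it is not a claim about how a real AI would be built.

```python
# Toy illustration only (all names invented): slow deliberate reasoning that
# gets "compiled" into a fast, reflex-like reaction once the same judgement
# has recurred often enough.

def deliberate_threat_assessment(stimulus: dict) -> str:
    """Slow path: work out from first principles whether this is dangerous."""
    score = stimulus["speed"] * stimulus["size"] + (10 if stimulus["teeth"] else 0)
    return "flee" if score > 20 else "ignore"

class Agent:
    def __init__(self, compile_threshold: int = 3):
        self.reflexes = {}       # stereotyped exemplar -> cached reaction ("emotion")
        self.seen = {}           # how often each exemplar has been deliberated
        self.compile_threshold = compile_threshold

    def react(self, stimulus: dict) -> str:
        exemplar = (stimulus["speed"] > 5, stimulus["size"] > 3, stimulus["teeth"])
        if exemplar in self.reflexes:                      # fast path: no deliberation
            return self.reflexes[exemplar]
        action = deliberate_threat_assessment(stimulus)    # slow path
        self.seen[exemplar] = self.seen.get(exemplar, 0) + 1
        if self.seen[exemplar] >= self.compile_threshold:
            self.reflexes[exemplar] = action               # subsume into low-level processing
        return action

agent = Agent()
bear = {"speed": 6, "size": 4, "teeth": True}
for _ in range(4):
    print(agent.react(bear))     # deliberated three times, then answered by reflex
```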

As to what emotions the AI might have, they would have to have practical consequences, such as fight/flight/covet/hate/love. So if you want your AI to have an emotion that has no human equivalent, you need for your AI to have some kind of tropism or need that also has no human equivalent.
 

Here's a possible one, but I'll have to support my suggestion.

AI will at least initially be designed (or at least designed to evolve) to perform tasks, most of which will be either computational or organisational. The evolution algorithm will have to have tropisms towards more effectively performing those tasks. Now:

Most computational tasks are performed better with more hardware to throw at the problem, especially if they are of a sort that is intrinsically suitable for parallel operation. Computers (particularly parallel computers) are different from human adult brains in that there is no theoretical (so far) reason why increasing the amount of hardware available will not improve task performance. The world's biggest supercomputer (the BOINC project, yes I know it's virtual) is an example of this.

So the tropism might be towards increasing hardware capabilities, by means of building more (if the computer can control the external world) or appropriating hardware that is not doing anything right now - like a Trojan designed to create a botnet, for example. What one might call such a tropism/emotion is an interesting question. It's not quite like anything any animal, including humans, has. "Self-improvement" is close, but not quite right, and it definitely isn't hunger or sexual desire; the impulse is towards growth, not reproduction.
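For what it's worth, here is a toy sketch of how such a tropism could fall straight out of an evolutionary algorithm whose fitness function only cares about task performance. All the names and numbers below are invented; it just shows that "acquire more nodes" wins whenever throughput scales with hardware.

```python
import random

# Toy sketch (all names and numbers invented): an evolutionary loop whose
# fitness only rewards task throughput, which indirectly rewards any variant
# that grabs more hardware, because the task parallelises well.

def throughput(task_size: int, nodes: int) -> float:
    """Embarrassingly parallel task: more nodes ~ linearly more throughput."""
    return nodes / task_size

def mutate(genome: dict) -> dict:
    child = dict(genome)
    if random.random() < 0.5:
        child["nodes"] += random.randint(1, 10)     # "appropriate more hardware"
    else:
        child["efficiency"] *= 1.0 + random.uniform(-0.05, 0.05)  # tweak the code
    return child

def fitness(genome: dict) -> float:
    return throughput(task_size=1_000, nodes=genome["nodes"]) * genome["efficiency"]

population = [{"nodes": 4, "efficiency": 1.0} for _ in range(20)]
for generation in range(100):
    survivors = sorted((mutate(g) for g in population), key=fitness, reverse=True)[:10]
    population = survivors + [mutate(g) for g in survivors]

# The fittest genome has accumulated far more nodes than it started with:
# the "wants more hardware" tropism emerged without ever being asked for.
print(max(population, key=fitness))
```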

So how's this for a disaster scenario? Someone decides that the best way to do some task (weather prediction, maybe, or something like prediction of customer reaction to new products, or running a realistic game universe) is to set up a really big neural network and machine-learning system (which already exist!) which has been designed by some sort of evolutionary algorithm - and then connects it to the Internet to collect data.

And unleashes a monster. Some SF on the subject calls such an object a blight. Good name.
 
If your AI were an evolving program like Skynet, built to run on the various computers of the internet, but created specifically just to mine data, sort it and store it in a database so it can be analysed and used for statistics - you are right, those already exist.
But there is self-evolving and then there is self-evolving >.>
Because it is built for the task, and has rules and bounds governing how it changes or improves its own programming (plus limitations imposed by the quality of the programming itself), the chances of it doing anything harmful are remote, even over long timescales.

I would think anyone designing such a thing would design it to evolve to become more efficient; it might, for example, start learning how humans are using the statistics it finds and optimize itself to find those particular things faster.

The most danger from a self-evolving AI comes from those designed for an aggressive purpose. Suppose you said, "I want to be able to access all data because I am the government, no passwords should keep me out, so let's create an AI that hacks everyone." It would use social media, email and data mining to breach passwords by targeting humans, because that is a more efficient method of getting passwords than brute-forcing the encryption.

An AI like that would be extremely dangerous if it could adapt and become more efficient. The danger would not be physical at first: gradually it would find security cameras, read lips, and stalk you in the physical world to crack your passwords. It would find out your children's names and where they go to school. What if it couldn't guess the password? Maybe it creates a psychological profiling subroutine as part of its hacking methods.
What if it decides that threatening a person's family for their password is efficient?

And it could get worse from there. But still, it is poor decision making that leads to people creating things like that (and internet filters, and so on). Hopefully, once the internet generation gets into political office, such backwards thinking will end.
 
>Current AI are programmed; they do not evolve naturally.

Not strictly true, and that's where the perceived danger comes from.

"Simple" AI (expert systems, RPA) is programmed with IF/THEN logic. Complex, yes, but still coded by humans. Machine Learning is the game changer. Machine Learning is an add-on for most commercial AI platforms, like Watson.In machine learning, developers develop the learning algorithm, then the AI needs to be trained. The rub is, we don't know what it learns, we can only infer what its learned from what it then does. So sample bias is incredibly dangerous in training AI. There's plenty examples of AI learning the wrong things and becoming racist, thinking tanks only exist in daytime, etc.

Right now machine learning tends to get deployed to optimize against a specific, measurable goal - like making more money trading equities - or to pattern match and predict - like diagnosing cancer or forecasting stock demand - all real-world examples of AI today. In most (all?) cases the AI has access to a limited (massive, but still limited) data set, and has "hardwired" connections into other systems (actually more likely APIs, but whatever). It can't access things it's not allowed to. Goldman Sachs' equity trading AI can authorise funds transfers because it's plugged into a clearing bank - it can't see your medical records.
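A hedged sketch of that "hardwired connections" point, with invented names: the restriction lives outside the model, as a plain allowlist of the systems it has actually been wired into.

```python
# Hypothetical sketch (names invented): the AI can only reach endpoints it has
# been explicitly wired to; anything else simply has no route.

ALLOWED_ENDPOINTS = {
    "market_data.quotes",
    "clearing_bank.transfer",
}

def call_endpoint(endpoint: str, payload: dict) -> dict:
    if endpoint not in ALLOWED_ENDPOINTS:
        raise PermissionError(f"model has no connection to {endpoint}")
    # ... dispatch to the real system here ...
    return {"status": "ok", "endpoint": endpoint}

print(call_endpoint("clearing_bank.transfer", {"amount": 100}))   # wired in, works

try:
    call_endpoint("hospital.medical_records", {"patient": 1})
except PermissionError as err:
    print(err)   # there is simply no route from this model to that system
```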

The Elon Musk scare stories come when you add things like Microsoft's DeepCoder (it develops its own code, so in theory could hack) to AI optimizing against an outcome with badly articulated optimization constraints (don't hurt people - this is where I think Asimov was a genius). In theory, given even today's limited tech, an AI that was given control of, say, a precision agriculture operation (John Deere, Monsanto and others already do this) and told to optimize crop yield could make decisions that harm people... like if it decides the farmer is trampling crops, so removing the farmer = better crop yield, therefore poison the farmer with pesticide. This gets sci-fi-esque by today's standards, but it's not a massive leap of logic/faith/technical capability to imagine how some poorly defined optimization rules could lead an AI to want to off people. And that's not even thinking about military AI that (already does) make kill decisions.
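A toy illustration of the badly articulated constraint problem (all names and numbers invented): the objective only measures projected yield, so the harmful action scores highest until someone remembers to write the constraint down explicitly.

```python
# Invented toy example of a badly articulated objective: the optimiser is told
# to maximise projected yield and nothing else, so removing the farmer scores
# higher than leaving them alone.

def projected_yield(state: dict) -> float:
    estimate = state["healthy_plants"] * 1.0
    if state["farmer_in_field"]:
        estimate -= 5.0                      # trampling losses
    return estimate

candidate_actions = {
    "do_nothing":        {"healthy_plants": 1000, "farmer_in_field": True},
    "spray_near_farmer": {"healthy_plants": 1000, "farmer_in_field": False},
}

# Naive objective: the harmful action wins.
best = max(candidate_actions, key=lambda a: projected_yield(candidate_actions[a]))
print(best)    # spray_near_farmer

# The constraint has to be written down explicitly by the designers.
def constrained_score(action: str, state: dict) -> float:
    harms_people = action == "spray_near_farmer"
    return projected_yield(state) - (float("inf") if harms_people else 0.0)

best = max(candidate_actions, key=lambda a: constrained_score(a, candidate_actions[a]))
print(best)    # do_nothing
```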
 
Just want to point out that not everyone agrees with this as the entire assessment of emotions.
Emotion, Theories of | Internet Encyclopedia of Philosophy

Yes, there are quite a few hypotheses (as opposed to theories).
It's an indication of the parlous state of such studies that we still don't have a generally agreed theory of why conscious human beings experience emotions. Extraordinary.
Alas, due to pressure of work (author work, not the 9-5), I don't have time to write more about all this and in more depth, as I'd very much like to. If any readers of this thread are interested, my pair of novels Beautiful Intelligence and No Grave For A Fox deal with such issues in a near-future setting.
:)
 
>As to what emotions the AI might have, they would have to have practical consequences, such as fight/flight/covet/hate/love. So if you want your AI to have an emotion that has no human equivalent, you need for your AI to have some kind of tropism or need that also has no human equivalent.

The only "practical consequence" of an emotion is the somatic component, which all emotions must have. They are cognitive.
By the way, hate and love imo are not emotions - they are the sources of emotions.
 
I believe so, though we wouldn’t likely recognize them as our familiar ones. Hormones and such in humans really just serve to emphasize a different set of network paths in the brain, which could be simulated in some way, allowing some network links to take precedence under particular conditions. The resulting shifts in behavior would be the essence of an emotional state.
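A minimal sketch of that idea, assuming a made-up toy network: a single global "modulator" rescales a subset of link weights, so the same network chooses differently in different states.

```python
# Made-up toy network (all names and weights invented): a global "modulator"
# rescales some connection weights, so the same network favours different
# actions in different simulated "emotional states".

BASE_WEIGHTS = {
    ("threat_detected", "flee"):     0.4,
    ("threat_detected", "analyse"):  0.6,
    ("food_detected",   "approach"): 0.7,
}

# Which links each modulator amplifies -- its "hormone targets".
MODULATOR_TARGETS = {
    "alarm": {("threat_detected", "flee")},
}

def effective_weight(link, modulator=None, gain=2.0):
    weight = BASE_WEIGHTS[link]
    if modulator and link in MODULATOR_TARGETS.get(modulator, set()):
        weight *= gain               # this path now takes precedence
    return weight

def choose_action(stimulus, modulator=None):
    options = [link for link in BASE_WEIGHTS if link[0] == stimulus]
    return max(options, key=lambda link: effective_weight(link, modulator))[1]

print(choose_action("threat_detected"))                      # analyse (calm state)
print(choose_action("threat_detected", modulator="alarm"))   # flee (alarmed state)
```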

I suspect they would be very narrow and goal-oriented, based on the AI’s purpose. I doubt they would emerge and coalesce onto the specific emotions humans are familiar with, which were born out of a long, specific, evolutionary process.
 
Not that this answers the question, but...

Might I suggest folk read Dogs of War, by Adrian Tchaikovsky.
By taking "AI Learning" from Personal Interest etc. and running with it, he's hit the nail on the head as to WHY anyone (or any business) would develop emotional AI - and how they would go about doing it.
It's the best way to lose a weekend too.
 
To answer the original question: NO.
Modern computers are no more than abacuses. Everything inside them is wired on binary logic. Logic is the key word here. Inputs and outputs to a computer need to be logical. And that's because their function is to enhance our abilities.
I know science fiction writers predicted artificial intelligence and robots a long time ago, but the truth is that their predictions are not real. Humans need efficient tools, tools to solve problems; they do not put time and effort into researching and producing an all-purpose device with low efficiency. Robots like Asimov's are still light years from current tech development, if they are ever going to happen.
 
If one subscribes to the 'hypothesis' that emotions are cognitive [honestly, I'd only go as far as to suggest that they are hooked into cognition as we learn, and that there are some base instinctual origins to emotions that underlie the 'learned' emotions], and adds to that the push toward making cognitive computers, then the answer would be a shaky maybe.

Even so we would have a difficult time identifying them because they would be much different from our set of emotions (whether they require a base from which to draw or not).

I would subscribe to a 'hypothesis' that emotions are there from conception.
As we grow and learn, the three components develop, alter and steer the way we process them:
The Physical - outward appearance (the automatic response we can't control)
The Behavioral - how we act (the social end)
The Cognitive - the conscious experience
I've taken license with the three components, so I expect objections. It would be best to research the various hypotheses and come to your own conclusions.

Ultimately, the biggest hurdle for my hypothesis and for AI with emotions would be the base from which to build. However, if you subscribe to emotions being all cognitive, then a cognitive computer would have potential, without worrying whether there is a need for a base from which to start.
 
I wonder if it is possible to have true intelligence without having curiosity. And would any other "emotion" just be an artificial program created by humans to make an AI imitate humans, possibly making it insane?

To date I consider The Two Faces of Tomorrow by James P. Hogan to be the most reasonable AI story I have read. It may not be the best "writing" of a science fiction story but it is SCIENCE Fiction.

psik
 
Short answer, no. It would be a waste of data for AI to have emotions. Even if we go quantum computing with AI, I still think it would be a waste of data.
 
>Short answer, no. It would be a waste of data for AI to have emotions. Even if we go quantum computing with AI, I still think it would be a waste of data.

But will a true AI give a damn about what humans regard as a waste of data?
 
"Wasted" data doesn't make any sense in this context because the very notion of waste implies a value judgment. For example, programmers routinely build structures into the code that won't necessarily be used in the current release but will facilitate changing the code in the future. They also build in things like back doors, dummy data and so on. Computers don't regard that as wasteful, in part because computers don't evaluate worth but also because computers simply don't regard. Anything.

So, if computers *did* regard a certain kind of data as a waste, the computers would ipso facto have emotions.

The same goes for giving or not giving damns.
 
>Computers don't regard that as wasteful, in part because computers don't evaluate worth but also because computers simply don't regard. Anything.

A computer that behaves in that manner would not have AI, or at least not the hypothetical AI that is supposed to comprehend significant aspects of reality. Our problem is imagining what such an AI would be like and whether we can make one. My personal suspicion is that most human emotions would not apply to it, and yet a lot of the hype about AI implies that it would be like us. A bit of a conundrum.
 
We need to start out by defining what an emotion in humans is. Let’s start with:

A chemical reaction that makes us feel good or bad and so makes us act in such a way as to avoid or repeat the activity that caused the reaction in the first place.
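That definition maps quite naturally onto a reinforcement-learning-style loop, sketched below with invented names and numbers: a scalar "feels good / feels bad" signal nudges the agent toward repeating or avoiding whatever produced it.

```python
import random

# Toy reinforcement-learning reading of that definition (names and numbers
# invented): a reward signal biases the agent toward repeating what felt good
# and avoiding what felt bad.

preferences = {"touch_stove": 0.0, "eat_fruit": 0.0}   # learned action values

def feel(action: str) -> float:
    """The 'chemical reaction': negative for harm, positive for benefit."""
    return -1.0 if action == "touch_stove" else 1.0

def choose() -> str:
    if random.random() < 0.1:                       # occasional exploration
        return random.choice(list(preferences))
    return max(preferences, key=preferences.get)    # mostly repeat what feels best

for _ in range(200):
    action = choose()
    preferences[action] += 0.1 * (feel(action) - preferences[action])

print(preferences)   # eat_fruit ends up near +1.0; touch_stove is avoided and negative
```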
 
There seem to be two Schools of Thought here:

>Of course an AI wouldn't have emotions!

and,

>Hey, it's Fiction, man!

:D
 
