Would an AI have its own emotions?

EJDeBrun

Well-Known Member
Joined
Oct 11, 2016
Messages
370
So I'm writing a short story and I've come to a crossroads regarding the technology, so I'm reaching out for opinions.

Based on the DeepMind neural network technologies currently being studied, do you believe that the Artificial Intelligence systems of the future will develop their own emotions?

This is a total free-for-all question. I'm interested to hear the arguments for and against. From my research, while it's considered critical to create AIs that can read and react to human emotion, there seems to be an ethical argument against AIs developing emotions for themselves.

Thanks in advance for everyone's input.
 

Short answer: yes, it can. Emotions come from two sources in humans - chemical reactions, which can be modelled in an AI, and complex neural bundles, which we will eventually be able to model if enough resources are pushed into such endeavours. So it all boils down to how you build your AI.

The more interesting question for me is: can an AI demonstrate emotions other than human ones? If so, what impact would they have on the AI, and through the AI on the rest of the universe?
 
The effect of hormones on human states of emotion cannot be overestimated. It's challenge enough to try to create an advanced artificial intelligence - but to give it access to authentic emotional states would, IMO, be a separate and equally difficult development.
 
Apologies, @Brian G Turner, but I can foresee the creation of a program that will emulate hormone participation in AI emotions.

Upon creation an AI would certainly respond "emotionally" in accordance with its programming. But as it is a learning machine, it will slowly develop its own emotions, first through mimicry, then via its unique "personality"... rather like a child! :D
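Something like this toy sketch (Python, with made-up hormone names and numbers - purely illustrative, not how any real system works) is what I imagine "emulating hormone participation" might look like: levels spike on events, decay back toward a baseline, and the label the machine reports as its "mood" is just a readout of that simulated chemistry.

```python
class HormoneModel:
    """Toy simulation of hormone-like state variables (hypothetical names).

    Each 'hormone' is a level in [0, 1] that drifts back toward a baseline,
    while events spike it, biasing the reported 'mood'."""

    DECAY = 0.9  # fraction of the deviation from baseline kept per tick

    def __init__(self):
        self.baselines = {"cortisol": 0.1, "dopamine": 0.3}
        self.levels = dict(self.baselines)

    def on_event(self, event: str) -> None:
        # Events perturb hormone levels, as stimuli do in a body.
        if event == "threat":
            self.levels["cortisol"] = min(1.0, self.levels["cortisol"] + 0.5)
        elif event == "reward":
            self.levels["dopamine"] = min(1.0, self.levels["dopamine"] + 0.4)

    def tick(self) -> None:
        # Each time step, levels decay back toward their baselines.
        for name in self.levels:
            base = self.baselines[name]
            self.levels[name] = base + (self.levels[name] - base) * self.DECAY

    def mood(self) -> str:
        # The 'emotional' label is just a readout of the simulated chemistry.
        if self.levels["cortisol"] > 0.5:
            return "anxious"
        return "content" if self.levels["dopamine"] > 0.5 else "neutral"


ai = HormoneModel()
ai.on_event("threat")
print(ai.mood())   # "anxious" right after the scare
for _ in range(30):
    ai.tick()
print(ai.mood())   # back to "neutral" once the 'cortisol' decays
```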
 
All very interesting points. Please keep it up.

In response to the input so far:

Based on my research, @Brian G Turner makes a very valid point: our current understanding of human emotion is so limited that we do not yet have the ability to recreate the same "emotions" in an AI. If it does occur, it will happen rather far in the future and may not even be predictable (as @Serendipity pointed out, it could emerge as a by-product of the AI's own thought processes).

In terms of the question of it developing its own emotions, it's difficult to say, since we don't have a quantifiable measure of internal "emotions". We can definitely get an AI to respond in an emotionally appropriate way to human input, but there is no real practical need for, and in fact there are several arguments against, having the AI feel the emotions it projects for its humans.

I think @Cathbad has pretty much found the middle ground. But it's a question of whether, as @Serendipity stated, that is a human emotion or a form of the computer's own emotion.

Ultimately, I think I will use the AI's thought process to create what humanity might consider to be an emotional state, but without naming it. That may or may not help reader engagement with his thinking...
 
Point 1:
The question of whether an AI can develop emotions on its own often reveals a lack of understanding of what an AI is. There is a lot of magical thinking around AI; in science fiction we imbue these things with near-mythical powers. We are told the holy grail of AI development is a generalised one that can mirror human neural pathways, or change its own programming, take over the world, etc.

Point 2:
Current AIs are programmed; they do not evolve naturally. All of their responses, the way they behave, the rules they follow, their limitations - this is all defined before an AI is even turned on.
This does not mean AI cannot evolve naturally. BUT that kind of AI is completely different from anything we have even come close to. It requires technology we have not yet invented, and computer processing power greater than anything we have right now.
It also requires that someone add the ability to evolve and change its own programming. So, yeah.

---------
AI can come in all different shapes and sizes, types and categories; there are definitions of different types floating all over. Currently, the mythical AI that can emulate and replicate human neural networking, brain-cell firing and so on is the only kind of AI that might be capable of developing emotions. This is a naturally evolving AI: it would be entirely programmed through its own experiences, learning and growing up just like a child. These experiences would change each AI, so each one would be unique - just like a human.
A naturally evolving AI would be just like humans.

Depending on the hardware, such an AI might have capabilities beyond a present human mind: it might be faster, able to accurately store information without forgetting anything, etc.
Now, whether you could ever control such a thing depends on how much we understand about brain cells and neural networking. We would learn a lot from the first naturally evolved AI and might learn to place limitations on such a thing, but this would raise moral questions. Seeing as it is based on a human brain - it thinks and behaves like a human brain - would deleting it or changing it be considered murder?
These kinds of questions will be raised when we create a naturally evolving AI system, because the first one will be based on a human mind. We might do dogs or other animals before we do a human version.

---------
So I believe that whether or not an AI in the future develops emotions really depends on whoever is creating the AI. It will never happen without the creator knowing about it or having expected it: too much hard work would go into any system designed to produce emotionally contextual responses for that not to have been the intended outcome.
A calculator does not have the capacity, a chess master has no need, and one designed to read faces might understand emotions, but that's all it does.

Any AI which develops or changes its own programming will have been designed that way.
Bugs and physical damage to a computer will never cause it either; that would be like a single grain of sand somewhere in the universe learning how to jump through random chance, despite having no brain, no legs and being a mineral.

So yes, an AI could develop emotions, but not without being designed with the capacity, and the programming, to do it.


-------------
What is the closest we might come to an AI creating emotions on its own, without the creator specifically defining them?

Not very far. No bug will do it, no physical damage, and classical programmed AI won't do it either - not without being told to.
A neural-networking AI would have the capacity, but as such AIs develop like humans anyway, the creator would generally expect it.

-------------
When might an AI do it without the creator wanting it to?

The only way would be if the AI is a neural-networking AI that has been intentionally restricted or limited using both hardware and software limitations - the capacity in the base system is still there, but say you deleted the neurons responsible for emotion.

In that case, I could see a physically damaged AI brain potentially repairing the deleted neurons.
Such an AI could also naturally reprogram itself to use different neurons if, say, you missed a few, or if you did the deletion without understanding the neural network 100%.



--------------
Far far far future AI

So say it's way, way, way in the future and we understand 100% how neurons work; we understand how our brains work cell by cell. We learn enough to be able to create a neural network that does not emulate a human brain but grows on its own.
This future AI would still be affected by its initial rule set and limitations. If it was developed with no limits and just told to do its own thing - who knows.
But if it is a designed AI, its developing emotions on its own would be similar to the programmed-AI case: it might develop and learn, but we would understand enough that, if we wanted to stop it from ever developing emotions, we could do so.

------------
A possible exception - kind of
There is maybe one exception. Say you developed a coffee machine that can recognise faces and make the preferred coffee for each customer.
If you gave this machine's AI the ability to optimise its own programming to create the perfect coffee based on face recognition and visual cues,
you could see humans making an emotional connection with that specific coffee machine.
This is similar to when we anthropomorphise things by naming them: we are in effect viewing the machine as though it had a personality and emotions.

So if emotional development is actually relative to the observer, you might think an AI has developed an emotional response despite its lacking one.
If that makes sense: humans could read into it more than what is beneath the surface. So if a story is told from an outside observer's point of view, you could give an emotionless AI the appearance of emotions despite everything being a programmed response.
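To make the coffee-machine idea concrete, here's a minimal Python sketch (all names hypothetical) of the kind of per-customer optimisation loop such a machine might run. Nothing in it feels anything - it just keeps scores and updates them from feedback - yet a regular customer could easily read the resulting personalisation as warmth.

```python
from collections import defaultdict

class CoffeeMachineAI:
    """Toy per-customer preference learner (all names hypothetical)."""

    def __init__(self, recipes):
        self.recipes = list(recipes)
        # scores[customer][recipe] -> running estimate of how much
        # this customer likes this recipe (starts at 0.0 for all).
        self.scores = defaultdict(lambda: defaultdict(float))

    def recommend(self, customer_id: str) -> str:
        # 'Remembering your usual' is just an argmax over stored scores.
        prefs = self.scores[customer_id]
        return max(self.recipes, key=lambda recipe: prefs[recipe])

    def feedback(self, customer_id: str, recipe: str, rating: float) -> None:
        # Exponential moving average: recent ratings count for more.
        old = self.scores[customer_id][recipe]
        self.scores[customer_id][recipe] = 0.7 * old + 0.3 * rating


machine = CoffeeMachineAI(["espresso", "latte", "flat white"])
machine.feedback("alice", "flat white", rating=1.0)
print(machine.recommend("alice"))  # "flat white" - learned, not felt
```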

A really well-programmed AI might seem to have emotions, and might appear to develop them as it learns more about a person and fine-tunes its responses, but it is still programming; it is not spontaneously developing emotions.

(I mean, that's how I would see people experiencing AI until we figure out brain-emulating ones.)

TLDR:
Current AI - programmed AI will never develop emotions without being told to.
Neural-networked AI will develop emotions, because they are emulated human brains, which grow that way - so not an unexpected result.

Will this ever happen without the creator realising it? Currently no; only an AI with the base capacity for emotional learning could ever develop emotions.

Will this ever happen accidentally? It is possible, but only if the AI has that base capacity disabled intentionally; it could be re-activated through random chance, or re-activated if the neural network is damaged and self-repair fixes the emotional block.

I think a lack of understanding of how programming and AI work is what makes people think an AI could change its own programming without being told to do so.

RARG eat wall of text.
 
Would an AI want to develop emotions? Would it see them as a useful feature or an unnecessary burden? Why would it want to weigh itself down with them, bearing in mind how much trouble they cause not just the human race but other creatures.
 

My take on this is that emotions in an AI are inevitable. Bear with me:

Nobody knows how to program true sapient AI, and it may in fact be impossible. The only sapient entity we know of - the human brain - probably gets that way because of massive parallelism and many different processes all going on at once, with what might be called the "consciousness process" being unaware of them. How many times in his/her lifetime does someone consciously forget about a problem and have a solution to it pop up, apparently from nowhere, anything from minutes to days later?

In other words, sapience is an emergent phenomenon coming from the brain, which is "designed" for survival.

Now - it's likely true AI will emerge from some massively parallel structure, likely including some aspects of neural networks and machine learning. But to get something like that to develop, there has to be some sort of reward and punishment matrix: success leads to some sort of desirable result, whereas failure leads to undesirable ones. Which leads to drives and emotions. However, neither of these is necessarily going to be analogous to the human equivalents except in very general terms. A survival instinct and a "desire" to grow in effectiveness, and maybe even to reproduce, are very likely, however.
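As a toy illustration of the reward/punishment point (the action set and numbers below are invented, not anyone's real design): after enough iterations of a loop like this, the agent "prefers" whatever historically paid off - a drive in embryo, never explicitly coded as an emotion.

```python
import random

ACTIONS = ["explore", "exploit", "rest"]   # hypothetical action set
ALPHA, EPSILON = 0.1, 0.2                  # learning rate, exploration rate

# Learned 'desirability' of each action - the seed of a drive.
value = {action: 0.0 for action in ACTIONS}

def choose_action() -> str:
    # Occasionally act at random; otherwise follow learned preferences.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(value, key=value.get)

def environment(action: str) -> float:
    # Stand-in world: 'exploit' reliably pays off, 'rest' never does.
    return {"explore": random.uniform(-1.0, 1.0),
            "exploit": random.uniform(0.0, 1.0),
            "rest": 0.0}[action]

for _ in range(1000):
    action = choose_action()
    reward = environment(action)
    # Nudge the chosen action's value toward the observed reward.
    value[action] += ALPHA * (reward - value[action])

# 'exploit' ends up with the highest value: a preference the program
# was never directly given, only shaped into by reward and punishment.
print(sorted(value.items(), key=lambda kv: -kv[1]))
```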

I think that AI is going to look very like A-life. BTW, some programming by humans is going to be needed - but IMHO it will be limited to the basic drives and purposes. AI is going to be no more predictable than a human is.
 
In an effort to progress science, it is almost certain that future technology in relation to AI will have the ability to create a personality, and therefore emotion. Whether this will be genuine, self-developed emotion is open to question.
 
In Person of Interest, the Creator purposely wrote code that allowed the Machine to learn, to enhance itself. This led to the AI writing its own code from time to time, developing a quasi-personality of its own, and even modifying the moral standards the Creator instilled in it.

It all seemed a very logical progression for the AI. One could even feel the Machine working out its dilemmas in deciding how to react to or deal with certain things (especially toward the end).
 
Man, I like thinking about AI so much that I keep ending up writing massive text walls trying to come up with responses :p It's a hobby of mine.
Fair warning: I am no scientist. I have just read a lot of Wikipedia, watched too many documentaries, and had very long conversations with a programmer or two while we worked on our own AI in a game :p

I think when defining your AI you need to consider a couple of things:
what technologies and science spawned your AI, what its purpose is, and whether it would have the capabilities you want to give it. Below are some options based on real-world stuff.
If you are basing it on hard sci-fi, and not using magical "I explain it this way because I need it to be this way" logic, then consider the following.

Simulated brain: Halo's Cortana, Star Trek's Data.
Literally that: a full physical simulation of atoms within a computer, created only for science, discovery and learning. It must learn and be raised like a child in order to develop. Such a thing is a true AI; it will act within the expected norms for a human.
Works best for very realistic AIs: these will act and be, for all intents and purposes, human, and may even have parents. A more advanced one might be smarter and faster than humans, or have been raised in strange ways that make it act a little differently than a human, but it should still act in a recognisable way.

Smart program: Microsoft's Cortana, Skynet from Terminator.
This is what everyone who says "AI" nowadays is actually talking about - DeepMind, Google's smart cars, all that stuff: database analysis, image recognition, multiple programs and algorithms designed for a specific task.
They have a limited ability to increase efficiency, to add to databases, and to use and interpret information, but it is all still restricted by how the system organises itself and by the quality of its programming.
For efficiency's sake, and to not send programmers insane, they are developed for a specific task and do not operate outside those bounds.
Works best for coffee machines, ship navigation and personal assistants.

Hybrid AI: Ash from Alien, Bishop from Aliens, RoboCop.
A hybrid AI may as well be called an imprisoned simulation. We use bits and pieces of simulated information; we simulate chemical responses, electrical impulses, neurons and human thought processes; and we use programming to override specific bits and pieces and to instruct the system how to function.
Such a thing can function similarly to a human but has much more in common with a smart program: it is still restricted by programming.
The advantage of a hybrid AI for fiction is that it can break its bonds, especially if humans still don't fully understand how every neuron works.
It requires more technical and scientific know-how to make work than a simulated brain: you must understand brains well enough to create a fully simulated brain before you can even make a hybrid one.
Essentially it would be RoboCop - like a living brain with chips and technology replacing the parts we do not want.
Hybrid AI works best for stories where you want an AI to exceed or pass beyond its limitations.

Anthropomorphization:
Humans see human traits in many things. Even if your AI is just a smart program, a human might still treat it as though it were self-aware, had emotions, or as though its emotions mattered.
A smart enough program would be able to feed into this, making it seem as if it had realistic responses for a human who sees the program that way.
But internally, a smart program is all algorithms, databases and programmed responses.


From a business, "why would you do it?" standpoint:
Smart programs are where it's at. You can control them; they are efficient; they work where and when they are told; you can even own and copyright them, because they are not sentient... yet.

A simulated-brain AI is just like a person - why use one when you have a handy, cheap meat sack lying around fuelled by a combination of doughnuts and sunlight? It's expensive, and probably just as morally questionable as hybrids.

Hybrids would be failure-prone and morally questionable. I mean, imagine chopping up a six-year-old's mind and slapping programming in to tell it not to feel emotions, or not to be able to hear, and you get where I am coming from. Then imagine being a scientist and realising something you just created had the mental equivalent of a six-year-old's frontal cortex... and then being asked to chop it up so it stops learning, or so it makes decisions a particular way.
Kinda creepy. It's all fine and dandy experimenting on animal neural mapping, but make something similar to a human and your stomach gets a little queasy.
 
That's a funny word, emotion. The first question I would ask is, do dogs have emotions? Cats? Porpoises? Elephants? Are those emotions the same as human emotions or are they observed behaviors (perhaps demonstrable on a brain scan) to which we assign human terms?

Just for argument, I'm going to say that human emotions are precisely that: human. Uniquely.

Or, to put the shoe on the other hand, pretend you're an alien intelligence. Do humans exhibit <alien> emotions? Or do they exhibit observed behaviors to which we assign <alien> terms?

You see my point. I do not believe that computers will exhibit only what is programmed. I mean, they will, but only in the same sense that humans are merely a result of chemistry and physics. The gap between the technical explanation and the observed behavior will be so great that it will be more convenient to speak of AI "emotions" than to describe the programming. So, the AI will have observed behaviors. Will we assign the terms associated with human behavior to them? Almost certainly we will because we anthropomorphize everything.

Will the AI have "real" emotions? I have unasked the question.
 
The problem with emotions is that they can be illogical, which is something computers are not good at. A 'random' element can be programmed in, so that 2 plus 2 can sometimes equal 5, but without this can a computer learn to be illogical? Can it learn to love? Can it be angry that we are late, or jealous that we don't give it enough attention? I doubt it.

But then again, this is forcing human emotion onto something which is not human. As has been mentioned, AI emotion should be its own - so rather than love, hate and fear, it has its own set of characteristics that make it unique.
 
Oh, an AI could have emotions, but they would be simulations of, say, the neurons that fire when a human is in love.
If the AI is of the smart-program variety, it would probably have a sort of emotional "status" which would inform its decision-making, information handling and other processes.

Say a smart-program AI is told that it is currently "sad and depressed": the program takes longer to perform tasks, it occasionally engages its vocal processors to "moan about how depressing life is", and it makes other decisions based on its current emotional status.
It's not hard, but it is a lot of work. Emotions are not something a programmed AI could come up with on its own; they are the kind of thing you need a helping hand from a human for.
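Here's roughly what I mean, as a toy Python sketch (all names invented): the "emotion" is nothing but a status flag that the rest of the program consults - which is exactly why it counts as programmed-in rather than self-developed.

```python
import time

class SmartProgram:
    """Toy smart program whose 'mood' is a status flag set from outside."""

    # How each mood scales task time - pure, hand-written bookkeeping.
    MOODS = {"neutral": 1.0, "cheerful": 0.8, "sad": 2.0}

    def __init__(self):
        self.mood = "neutral"

    def set_mood(self, mood: str) -> None:
        if mood not in self.MOODS:
            raise ValueError(f"unknown mood: {mood!r}")
        self.mood = mood

    def perform_task(self, base_seconds: float) -> None:
        # The mood slows or speeds the work...
        time.sleep(base_seconds * self.MOODS[self.mood])
        # ...and gates side behaviours like moaning about life.
        if self.mood == "sad":
            print("Life... don't talk to me about life.")


bot = SmartProgram()
bot.set_mood("sad")
bot.perform_task(0.1)  # takes twice as long, then complains
```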

If the smart program were, for example... your navigational voice program >.> and you set it to "Lacking in Direction" as an emotional state, things could, however, get problematic.

As for the illogical nature of emotions... emotions are often quite logical; it is only emotions in humans that are illogical. We are programmed through experience over long stretches of time: we can learn from an experience but forget the experience itself, and be left with just how we behave as a result of it.
Programs are much simpler: you define what makes the program sad, and that makes it sad. Easy.
Happy and sad at the same time, though... that's quantum-computer stuff right there. Unless you go deep into maths and raarrrg complexcomplexcompleeex.
 

If the right situation occurs, any sufficiently intelligent and complex program could gain emotions - but it's got to have the framework in place to make that spark take hold, I would say. Now, whether emotions = sentience... that's a different question.
 
What do we mean by emotions exactly? I can imagine a computer having preferences ("I prefer to deal with complex problems than simple ones because I am better suited to complex problems") and being able to imitate emotions if programmed to do so ("I am angry with you, bank robber, because breaking the law is wrong"), but as to specifically irrational feelings - which is surely what emotions are - I find it hard to see why they would arise even on repetition of certain experiences from which humans might draw irrational conclusions ("I have been robbed by three Frenchmen, and so I consider that all Frenchmen are thieves", say). I can't imagine why, for instance, a computer would have a favourite colour unless it was given some kind of pre-made personality which included such things.

(I could see the pre-loaded personality being a good way to go, especially in humanoid robots. It would help with the inevitable weirdness they would display and might help get them out of the uncanny valley. And I could imagine something in that pre-loaded personality that would allow for the development of eccentricities and, from that, apparently emotional responses over sufficient repetition.)

I think there would be a very large jump from thinking "I prefer to work with Bob because he is pleasant and skilled" to "Bob deserves to be happy because he is a nice man". The second implies something more than just operational efficiency, which is surely what a computer would have at its core ("Function well until no longer required").

In terms of writing, I think it doesn't really matter unless a realistic computer is at the absolute core of the story. Many robot stories are allegories to one extent or another or don't require deep analysis to work. But it is something of a mental rabbit hole.
 
You might take a look at this:
https://www.ffri.hr/phil/casopis/content/volume_2/EUJAP_4_tappolet.pdf
Or not. It's your choice.
I think that if you could get your AI to 'care' about things, then you might have a possible avenue into this.
Right now the things we create follow a program that dictates what they do, and there is really no caring going on. They just do what they do. If they freeze up, they don't panic - they just stop and wait for some external force to take corrective action.

Once they start caring, they might begin to ponder ways to make sure they aren't interrupted, and when they are interrupted they might exhibit frustration, because they care.

It could care so much about preventing interruption that it begins to care less about quality - which would be bad. Or it could even have a conflict over which it cares about most: while trying to continue uninterrupted, it might struggle when it tries to maintain quality and discovers that it can't do both, because of something that's out of its control.

Oddly enough, I notice my dog has specific cares that override everything else and determine how she acts, and she has rather inventive ways of expressing her frustration when thwarted.
 
A program will do as its programming directs, just as a human does. The only real differences between code running on a computer and neurons firing in a soft mushy brain are:

Organic -
Organic processes are not the same as digital ones. They don't have some mystical grey area; they can still be binary, on and off. But organic processes can fail and keep on working: in organic systems the entire thing is built of redundancies and duplication. Our entire fleshy body is built from billions of cloned duplicate cells. If a cell dies, it is replaced. If a neuron fails to fire - so what, this one over here will. These redundancies mean minute changes can occur which do not impair the function of the organism.

Chemicals -
Our brains are chemical factories. They run on a liquid supply of chemicals, oxygen and nutrients, and they use huge amounts of energy; most of what we eat is used up top. All these chemicals change the way our brains function, and many of the effects are not obvious. Chemicals don't reach every part of our brain in the same way or in the same amounts, and each brain is different, which makes it even more confusing. Our brains adapt to the chemicals and nutrients we feed them, and outside stimuli change the way we think. If our body needs salt, somehow our mind starts to crave salty food.

Emotions -
Human emotions are hardwired: our brains evolved specifically to have emotions because they carry an evolutionary advantage. Emotions help you connect with those around you; they help you get fed when you are young.
If your mother were to die while you were extremely young, your being sad and crying would trigger a nurturing instinct in many females and males of our own species, even ones not related to you. This is evolution's way of ensuring our species' survival in hard times.
Emotions are tools for survival. We put them on a pedestal and try to say they put us above other species or make us special, but all emotions are is yet another evolutionary trait that keeps our species alive.
Many animals exhibit similar behaviours; however, we are an extremely social and communicative animal, and so our emotions could be said to be very complex.

Now, AI. They are created programs; they do not have any evolutionary factors applying to them other than those caused by human hands. To have any of those properties of an organic brain - which can evolve and change through each successive generation - an AI must face evolutionary pressures which subject it to survival or death. It must have a capacity for change, and older generations which have not changed must die, to be replaced by newer generations.

The only evolutionary factor affecting programs is humanity: we create, we upgrade and we destroy.

If you want an AI with emotions, then humans have to give it the capacity either to have them in the code, or to evolve them in order to survive. BUT humans would still have to program an AI TO evolve.
You would have to create a program that mishmashes random code together and repeatedly runs it. It would have to be programmed to mutate like a cancer; 99.99999999999% of the time the code would not even run, and the more code you try to run, the greater the chance of an error. It is the whole "a thousand monkeys chained to typewriters writing a novel" problem: it would take billions of years and might come up with "hello world".
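To see why the monkeys need a human leaning over their shoulders, here's a minimal sketch of that kind of mutate-and-test loop (Python; the target string stands in for "useful behaviour"). Left to pure blind chance it goes nowhere, but add one human-written piece - the fitness function defining what "better" means - and it converges on "hello world" in a few thousand mutations:

```python
import random
import string

TARGET = "hello world"              # 'useful behaviour', defined by a human
CHARS = string.ascii_lowercase + " "

def fitness(candidate: str) -> int:
    # The human-supplied judgement of 'better': count matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Blind mutation: change one random character to a random one.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(CHARS) + candidate[i + 1:]

best = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generations = 0
while best != TARGET:
    child = mutate(best)
    if fitness(child) >= fitness(best):  # selection pressure, defined by us
        best = child
    generations += 1

print(f"reached {best!r} after {generations} mutations")
```

Remove the fitness test and the loop wanders forever: the "evolution" only goes anywhere because a human told it what to evolve toward.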

The closest we have come is this: Microsoft AI can write its own code by stealing from others | WIRED UK. And all it can do is reuse code we already came up with; it would take billions of years to produce anything useful unless closely guided by a human.

It is easier to program in emotions than to create a system where the AI can evolve its own - and if you want it to evolve its own, you still have to tell it how.
So, regarding the OP's question about an AI developing emotions on its own, the simple answer is no.
An AI will never develop emotions without:
1. close emulation of human systems, or
2. being deliberately built with the ability to evolve emotions.

It will never spontaneously evolve or gain emotions on its own. It is an artificial creation; it must be given the ability to evolve in order to be able to evolve.

Life is like that: if you can't evolve, you die. Our programming tells us to evolve, and so we do. An AI is the same - it must be told to evolve.
 
