Artificial Intelligence - A Discussion Thread.

I'll avoid the most obvious questions - which would likely divert us into an area where things would be said that would get the thread closed - and simply ask: Why should the brain be designed this way? For what purpose?

Hmm yes, could get into dangerous ground there :eek:

RE Mosaix's complaint about computers being little more than complex adding machines - I agree completely, and firmly believe we will never achieve that level of sentience/sapience with the traditional sequential computer design, no matter how complex it becomes. Apart from anything else, I do not believe that any such intelligence could work on a foundation of absolute truth and falsity. I believe any such intelligence would have to have an element of "fuzziness" to it, and is more likely to be based on an architecture like a neural network - in other words, an electronic model of how our own brains work.
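To make what I mean by "fuzziness" a little more concrete, here is a toy sketch (the weights and threshold are invented, and real neurons are of course far messier than this):

```python
# Toy contrast between hard Boolean logic and a "fuzzy" graded unit.
# The weights and bias below are arbitrary example values.

import math

def boolean_and(a: bool, b: bool) -> bool:
    # Classical logic: inputs and output are strictly True or False.
    return a and b

def fuzzy_neuron(x: float, y: float) -> float:
    # A graded unit: inputs in [0, 1], output in (0, 1). A weighted sum
    # squashed through a sigmoid, so the answer is "mostly yes" or
    # "barely yes" rather than all-or-nothing.
    w1, w2, bias = 4.0, 4.0, -6.0
    return 1.0 / (1.0 + math.exp(-(w1 * x + w2 * y + bias)))

print(boolean_and(True, True))   # True
print(fuzzy_neuron(0.9, 0.8))    # ~0.69 - leaning "yes"
print(fuzzy_neuron(0.6, 0.5))    # ~0.17 - leaning "no"
```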

I find it a little strange that so many people can accept the idea of FTL, which breaks the fundamental principles of physics (as we currently understand it), and yet have such trouble with AI, which I firmly believe is "merely" (:rolleyes:) a question of achieving the necessary level of complexity and processing power.
 
The human brain is not designed; it evolved. Its purpose is to enhance the survivability of humans.
I want the thread (which is about artificial intelligence) to stay on track and so I will leave your statement hanging.

I can only hope other contributors will do likewise.


(Threads that have headed off on a similar tangent have ended up being closed in circumstances of some acrimony.)
 
If you're going to decide that a proven fact is not true, then what other facts are you going to arbitrarily decide are not true? There is no point in continuing this discussion if you don't accept science.
 
I'll say this as calmly as I can (as I hope I did, eventually and after much editing, in my response to Tinsel's post):




Perhaps you should consider whether:
  1. you want this thread to examine aspects of artificial intelligence (including the possibility of artificial sapience) in line with the original post; OR
  2. you want this thread to descend into a lot of arguments about religion (which is more than likely, given that it has happened on more than one occasion).
I think most folk posting here would prefer the former rather than the latter, if only because arguments about religion soon reach stalemate.


I think the one point at which the two might intersect - and so has to be dealt with with care and diplomacy** - is emergence, i.e. the idea that the complexity of AI systems (or any other information-handling systems) could permit the self-development of sapience and consciousness. Even then, I would rather not discuss this particular point through proxy arguments about religion (which are, by their very nature, not applicable to AI sapience as we might first encounter it).



** - Which is what I have been trying to do here, but with little success, it seems.
 
I agree with you completely, Ursa, and think you have been very diplomatic - I would hate this discussion to degenerate into a religious wrangle. It does unfortunately, as you suggest, touch on that in the area of the emergence of sapience. However, as you so rightly say, if we go there it will almost certainly end in stalemate and probably acrimony, and that would be a shame.

To maybe turn it away from that area: I'm not sure that the question (designed or evolved) is actually crucial. Maybe the only reason we struggle to fully understand the mind is that we would really need a mind of greater processing ability in order to do so. How can brain X ever hope to fully understand brain X? It's like a mirror reflecting itself, or looking at a photo of yourself looking at a photo of yourself looking at a photo of yourself....

Either way, can we model it? Can we create an artificial equivalent capable of the same level of processing? If so, why not one with greater processing capability and capacity? Not necessarily greater understanding, but rather giving it the greater capacity so it would possibly be capable of understanding. After all, there is stuff from modern physics being modelled on computers now that even the top physicists say the human mind will never really be able to fully grasp; only model it mathematically.

Sure, it's not going to happen today, not tomorrow, maybe not for a couple of centuries, but I personally believe we will have that capability sooner or later.
 
If you're going to talk about artificial intelligence, a technology, then you're going to have to accept the science it is based on as true.

As for this not being a religious discussion, it's too late. You turned it into one when you catered to their extortion and attempted to censor me because I believe in science. I will not tolerate anyone shoving their religious views down my throat, and I will not tolerate anyone who helps them. By saying we shouldn't state anything that might upset them, you have taken their side and stifled anyone with different views.

You have no idea how hurt and upset I am. But if you insist that the one who kicks up the most fuss is right, then I'll start doing so.
 
Me - Anonymous
I think that I shall never see a calculator made like me,
A me that likes martinis dry and on the rocks, a little rye.
A me that looks at girls and such, but mostly girls, and very much.
A me that wears an overcoat and likes a risky anecdote.
A me that taps a foot and grins whenever Dixieland begins.
They make computers for a fee, but only moms can make a me.
Indicating that what I am going to say is perhaps not 100% serious.

But also indicating that I read this poem in the late fifties or very early sixties (in The Magazine of Fantasy and Science Fiction), and my spotty and unreliable memory threw up enough consecutive words that, when I Google-searched it, every single hit contained the information I wanted. While I can't remember the name of the guy who's coming in tomorrow to record. Some kind of 'forgetory' is essential so that calculating power is not overwhelmed by available data (preferably a touch more effective than mine).

With 'Multivac', the 'vac' at the end did not signify it ran on vacuum tubes (thermionic valves, for those from this side of the pond); the 'ac' meant 'analogue computer' (which also required air conditioning). Perhaps, if we want to downgrade the perfect arithmetical functions (well, near perfect: it can be shown, by quantum effects, that any sufficiently complex system, like the Bell telephone network, will have a certain irreducible number of wrong numbers), we could try an analogue front end, preparing and distorting the incoming information for the 'pure intellect' arithmetic crunchers.
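Just to picture that front end (a sketch only; the noise-and-saturation model is invented purely for illustration):

```python
# Sketch of an "analogue front end" feeding a precise digital back end.
# The distortion model (Gaussian noise plus soft clipping) is invented.

import math
import random

def analogue_front_end(signal: float) -> float:
    # Add a little thermal-style noise, then saturate softly, the way
    # a real analogue stage clips large inputs.
    noisy = signal + random.gauss(0.0, 0.05)
    return math.tanh(noisy)

def digital_back_end(values):
    # The "pure intellect" arithmetic cruncher: exact and deterministic.
    return sum(values) / len(values)

readings = [analogue_front_end(0.5) for _ in range(1000)]
print(digital_back_end(readings))  # near tanh(0.5) ~ 0.46, never exact
```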

But that wouldn't change the 'multiprocessor, running at different speeds with different thresholds' biological logic engine. A computer is optimised, so a microprocessor with about the connectivity of an ant brain can calculate the orbits of stars. The ant has too many other things to do with its nervous system, and runs bypass autonomic subsystems to reduce the load on its central processor. Basically, if the ant were rationalised it could run considerably more efficiently, and the same is true of us (we are even more layered with unnecessary and inefficient leftovers). But is this spare capacity, evidenced by a few eidetic memories and idiot savants, wastage, or the very reserve that makes imagination and intuition possible?
 
I think that it is quite possible that the operation of the brain is so complex (and so dependent on the tiniest changes in billions of neurons - changes that perhaps cannot be measured without affecting the thing being observed) that we will never know exactly what is happening. This doesn't mean we have to veer towards the realms of fantasy.


I'll avoid the most obvious questions - which would likely divert us into an area where things would be said that would get the thread closed - and simply ask: Why should the brain be designed this way? For what purpose?

The network of paths in the brain would be too complicated to analyze unless a computer was involved, and a computer is just a tool, an extension of the human brain. What is used to traverse the brain? Is it electrical current? If so, what causes the current to take one path as opposed to another through the network? It must have something to do with instructions. How are the instructions created?

I think that it is possible to analyze part of the brain, but, as I said, there are other parts that may not be possible to analyze, because the brain could be designed to prevent itself from being analyzed. One side can shut the other side down, or else silence it.

It is just a guess, though I might have some basis for thinking that way. It is the part where instructions exist that is mysterious. That is just what I think. I'm probably not in the best mood to think about this stuff today, but yes, the other issue: the complexity involved surely appears to be unmanageable unless it can become self-propagating.
 
...complexity involved surely appears to be unmanageable unless it can become self-propagating.

I think that's absolutely right, which is why I believe we could never hope to "design" a system capable of sapience. But we might just be able to design an architecture or structure, if you will, that might be capable of achieving that. So we provide a framework, but the connections within that framework are made internally in response to external stimulus. In other words, learning. Such a system just might be capable of achieving sentience and eventually maybe sapience.
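A very rough sketch of the sort of thing I mean - a fixed framework whose connection strengths grow out of the stimuli presented to it (this is just textbook Hebbian learning in miniature, not a recipe for sapience):

```python
# Minimal Hebbian learning: the "framework" (which units exist and may
# connect) is fixed in advance; the connection strengths emerge from the
# stimuli. The patterns and learning rate are invented for illustration.

import random

N = 4                                    # units in our toy framework
weights = [[0.0] * N for _ in range(N)]  # all connections start blank

def present_stimulus(pattern, rate=0.1):
    # Hebb's rule: units that fire together wire together.
    for i in range(N):
        for j in range(N):
            if i != j:
                weights[i][j] += rate * pattern[i] * pattern[j]

# External stimuli: units 0 and 1 tend to be active together.
for _ in range(100):
    p = [1, 1, 0, 0] if random.random() < 0.8 else [0, 0, 1, 1]
    present_stimulus(p)

print(round(weights[0][1], 2))  # strong: units 0 and 1 co-occurred often
print(round(weights[0][2], 2))  # still 0.0: they never fired together
```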

However, as Chrispen so rightly points out, we have yet to create a system much more complex than that of an ant. Though I don't know - is that true? I think we might have gone a little further than that, though I don't know how many connections are estimated to exist in an ant's brain. Regardless, there is still a long way to go. By the way Chrispen, I loved the poem/verse...very apt :D
 
I think that's absolutely right, which is why I believe we could never hope to "design" a system capable of sapience. But we might just be able to design an architecture or structure, if you will, that might be capable of achieving that. So we provide a framework, but the connections within that framework are made internally in response to external stimulus. In other words, learning. Such a system just might be capable of achieving sentience and eventually maybe sapience.

However, as Chrispen so rightly points out, we have yet to create a system much more complex than that of an ant. Though I don't know - is that true? I think we might have gone a little further than that, though I don't know how many connections are estimated to exist in an ant's brain. Regardless, there is still a long way to go. By the way Chrispen, I loved the poem/verse...very apt :D

Maybe that is fairly accurate - the part where you said that there have to be connections within the brain that respond to external stimulus. I would not count that out; in fact it is certainly logical. I wouldn't call anything a framework, but more like a network or graph. Yet if we knew how to build something functional, you could probably implement a framework design, lol.

No, we have not reached ant status, because everything has just been an extension of mankind, so we have only built tools. Okay, I guess there is the field of biological science involving cloning and genetics, right? But then you are taking a different approach: you are working top down with organic materials rather than bottom up with non-living materials. I'd like to see something done with physical laws and non-living materials. Where is the connection?

If a human brain can be modeled in software, and the functions of the brain - the behavior - can be implemented in functions or methods, then the body, although not organic, can be modeled on an organism. If we know what happens to an organism, then we can copy the results and simulate them using a robot; then we are free from organic materials, but still possibly dependent upon humanity, since the robot is a simulation. Somewhere along that line, there might be some purpose for designing artificial life, such as an advanced symbiotic relationship.

Of course that would change the world and it could raise humanity, and it might answer many philosophical questions.
 
I actually avoided the word network, as I didn't want to tie the concept down that precisely, but I suspect some sort of neural network would be the most likely "framework". However, we do seem to agree that whatever it is, it must develop itself rather than be created complete, so to speak.

An AI created with inorganic components is certainly what I started out thinking of on this and the other AI thread - however, I suspect that some sort of hybrid is much more likely, with the development of organic "electronic" components (already being researched). However, I think there is a distinction here, in that these are not necessarily cell-based organisms - i.e. living and developing and needing feeding. I may be wrong, but I don't believe the organic components being researched are "living cells".

Bottom line though is that I think they will need to "develop" their own intelligence.

With regard to some sort of symbiotic system, I do think that is equally possible, maybe more so, and of course, as you state, there are all sorts of interesting philosophical discussions there. Certainly many authors have explored the idea of augmented humans, but I feel that is a separate discussion - one would argue there that the sapience still comes solely from the human (organic) part.
 
It has been shown that when we learn something new, new connections are grown within the brain. Huge parts of the brain are practically unused. If part of the brain is damaged and the person is young enough, other parts of the brain can take over the function of the damaged part, and new connections are made to those parts. What I am saying is that the size of the brain is not important; undoubtedly it is the connections (the complexity of the network) that determine how smart we are. I would assume the same to be true of any AI.
 
I'm not sure if a human being is able to create a life form, but we can manipulate the structure at an early stage of life. Basically, what might be worth doing is to create an artificial human being in order to analyze humanity, since a human being cannot live forever, and cannot realize what is possible without knowing/seeing how to do it, because the brain could be designed to prevent itself from being analyzed.

I did read something about the neural network of the brain being able to change dynamically. Yes, I knew about that too.

Oh, and the framework: if you meant that as a structure, then I know that it has been implemented in libraries and that it is the best structure for organizing object-oriented programs.
 
In conclusion of my view on the subject: yes, I suppose that a framework is a more generic term, and these other things are known as abstract data types, but what I believe is significant is including the external forces that act upon a living organism, and then understanding how those forces affect the body, because I believe they are factors that shape the mind and/or reality.

The most difficult stage in artificial intelligence is moving beyond the confines of our view(s) as humans, so the goal might become achieving human transcendence, followed by creating new life forms. How does a human being navigate?

You know, it is a complicated task when you begin to find answers, so we could revert to apedom as a solution, especially if the external environment is critical, but I believe that we will move forward in spite of this.
 
Dave, I agree that the complexity of the "network" - the connections - must surely be the key, and I also believe that anything of that order of complexity would have to grow (I don't mean organically, but by setting up new connections) in response to external stimulus, exactly as we do. I don't believe anyone, or any computer, could ever design a system of that level of complexity. However I do hold that we could create an analogy of the human brain - a sort of blank network - that could develop in exactly that way. I'm no expert on this, but I believe that is exactly what some of the research into robotic systems that "learn" about their environment is currently doing.
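For what it's worth, the learning-robot research I'm thinking of boils down to trial-and-error loops something like this (a bare-bones sketch only; the toy world and its reward scheme are entirely invented):

```python
# Bare-bones trial-and-error learner: the structure (the possible actions)
# is given in advance, but what the agent "knows" is built up entirely
# from experience. The toy world below is invented for illustration.

import random

actions = ["left", "right"]
value = {a: 0.0 for a in actions}   # the agent starts out knowing nothing

def world(action: str) -> float:
    # Hypothetical environment: "right" usually pays off, "left" never does.
    return 1.0 if action == "right" and random.random() < 0.9 else 0.0

for _ in range(500):
    # Explore occasionally; otherwise exploit what has been learnt so far.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    reward = world(a)
    value[a] += 0.05 * (reward - value[a])  # nudge estimate toward reality

print(value)  # "right" ends up valued near 0.9, "left" stays near 0.0
```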

Bottom line is that I reckon as such systems become ever more complex (or maybe I should say capable of ever more complexity) they will eventually reach the point of being self-aware and presto you have a sapient AI.

I take your point about size, but ultimately the human brain is limited by size (at least its current size, but let's not go there); however, an AI would not necessarily have the same limitations.

Another thought: even supposing we did manage to create such an AI, one that is maybe more "intelligent" than us, I'm not sure it would necessarily be faster. We often assume that because a computer can process specific (computational) tasks much faster than us, an AI would inevitably be much faster too. I don't know, but I suspect that our brains are probably just as fast as the fastest computer, if not faster. It is just that every "thought" has to "traverse" an unimaginably huge number of connections, and that takes time. An AI as suggested here would have the same problem, and so would be likely to be just as "slow" as us. That kind of makes me feel better :)
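To put some very rough numbers on that (all figures are ballpark estimates, purely to illustrate the point, not measurements):

```python
# Back-of-envelope comparison using commonly quoted ballpark figures.
# Every number here is a rough illustration, not a measurement.

synaptic_delay = 1e-3   # ~1 ms per synaptic hop (rough textbook figure)
serial_hops = 100       # suppose a "thought" crosses ~100 synapses in series
print(f"one serial chain of hops: {serial_hops * synaptic_delay * 1000:.0f} ms")

# But the brain is massively parallel:
neurons = 8.6e10        # ~86 billion neurons (common rough estimate)
firing_rate = 10        # ~10 spikes per second on average (very rough)
print(f"spike events per second, whole brain: {neurons * firing_rate:.1e}")

# A fast serial processor, for contrast:
cpu_ops = 4e9           # a few GHz, roughly one operation per cycle (crude)
print(f"serial operations per second, one core: {cpu_ops:.1e}")
```

On those crude numbers, each individual "hop" in the brain is painfully slow, but the sheer parallelism dwarfs a serial processor - which is exactly why a network-style AI might think no faster than we do.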
 
However I do hold that we could create an analogy of the human brain

I think I said in the other thread that I think the key to this is our understanding of DNA / RNA. Here we have a mechanism that can read a plan and build an entity from it. If we could understand how the plan works, we could insert our own and just let the mechanism get on with it (a toy sketch of the idea follows the list below).

We could build in all sorts of characteristics:

Resistance to radiation
Ability to live in a vacuum or under water
Improved eyesight (X-Ray vision?)
Improved processing power / memory
Interfaces to other entities / equipment
Etc., etc.
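As a toy illustration of that "plan in, entity out" idea (a real genome is nothing like a lookup table, and every name here is invented):

```python
# Toy "plan in, entity out" mechanism. This only illustrates the idea of
# a builder that reads a plan and expresses whatever traits it encodes;
# the genes and variants below are invented for illustration.

genome = {
    "radiation_resistance": "high",
    "habitat": "vacuum",
    "vision": "x-ray",
    "memory_gb": 1024,
}

def build_entity(plan: dict) -> dict:
    # The mechanism doesn't care what the plan says; it just expresses
    # whatever traits the plan encodes.
    return {"traits": [f"{gene}={variant}" for gene, variant in plan.items()]}

print(build_entity(genome))
```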
 
This is true, Mosaix, and it may well be the direction things will go - we are extremely close to designing our own cells from scratch (rather than just genetically modifying existing ones). However, I suspect there would be a lot more ethical outcry at the idea of a biological AI compared to an electronic one. There's really no good reason for that, but I still suspect it would be the case.
 
