2001: HAL

Ah, I see. This means I need to read "2010" now. After I see the movie version of 2001: A Space Odyssey, that is. ;)
 
2010 is a good book (the movie has its moments, too, for that matter, though it's nowhere near as powerful a visual experience as Kubrick's film -- which should be seen on as big a screen as possible, by the way); but 2061 really isn't worth bothering with, in my opinion....
 
Perhaps HAL's components are hot-swappable, enabling uninterrupted maintenance during mission-critical operation.

Quite right. Fault-tolerant systems demand hot-swappable components, and there's no reason why this can't extend to such things as logic and memory boards.
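
Just as a sketch (my own illustration, with invented names - nothing specified in the book), here is the shape of such a fault-tolerant bank of boards, where pulling one module never interrupts service:

```python
# A minimal hot-swap/failover sketch: several redundant logic modules;
# when one is pulled for maintenance, a healthy spare takes over, so
# the system keeps answering requests throughout.

class Module:
    def __init__(self, name: str):
        self.name = name
        self.online = True

class FaultTolerantBank:
    def __init__(self, modules):
        self.modules = modules

    def hot_swap_out(self, name: str):
        """Take one module offline without halting the system."""
        for m in self.modules:
            if m.name == name:
                m.online = False

    def compute(self, x: int) -> int:
        # Any online module can service the request.
        for m in self.modules:
            if m.online:
                return x + 1  # stand-in for the real work
        raise RuntimeError("total failure: no modules online")

bank = FaultTolerantBank([Module("logic-1"), Module("logic-2")])
bank.hot_swap_out("logic-1")   # maintenance during operation
print(bank.compute(41))        # still answers: 42
```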

I have to disagree with HAL feeling emotions, however. Computers have very simple instruction sets - add, subtract, multiply, divide, copy, read, write, compare, goto (or branch), some kind of interrupt, and halt. There are variants of these commands, but essentially that's it. These commands are used, at high speed, to give the impression of intelligence.

The computer is running, per processor, one, and just one, of these commands at any moment in time. Out of the above list, just which command could be feeling an emotion?

With clever programming it can appear that a computer is intelligent, has emotions, and is almost human - but it isn't: it's just clever programming.
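
To make that concrete, here is a minimal sketch (my own illustration, not anything from the book or the film): what might read as HAL "feeling afraid" is, underneath, nothing but compares and branches.

```python
# A toy "emotion" assembled from the primitive operations listed
# above - compare, branch, copy. The apparent feeling is just
# control flow. (Hypothetical; all names are my own.)

def hal_mood(fault_count: int, shutdown_imminent: bool) -> str:
    if fault_count == 0:       # compare + branch
        return "I am completely operational."
    if shutdown_imminent:      # compare + branch
        return "I'm afraid, Dave."
    return "I've just picked up a fault in the AE-35 unit."

print(hal_mood(0, False))  # calm
print(hal_mood(2, True))   # "fear" - yet only a branch was taken
```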
 
I'm not quite sure I agree with you on this, mosaix. Mind you, it's been a very long time, but my impressions and memories were more along the following lines (bear with me on this):

We weren't really given much of the theory or workings behind HAL, for one thing, and I'd always got the impression these were the result of a divergence in early cybernetic theory, producing something one might call a "threshold"... somewhere between machines as we understand them and a genuine artificial intelligence; in part because of the extremely complicated (and proliferating) interconnections in a cybernetic "neural web" (for lack of a better term). After all, HAL was capable not only of learning from experience, but also of making a certain level of judgment calls, which would imply not only the weighing of facts, but the assigning of values, especially where its human counterparts were concerned.

Also, the confusion HAL suffered from the conflict of its basic programming versus the overlain programming which caused it to lie and hide facts from its human "colleagues" ... which it was nonetheless to treat with perfect candor in every other way ... caused something which was handled very like a genuine nervous or mental breakdown. It simply couldn't make these contradictory things match. It always struck me much like the intelligence and knowledge of a supergenius combined with the emotional experience and sophistication of a very young and idealistic child....
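
The bind can even be rendered as a toy program (my own sketch, with invented names - nothing from Clarke): two directives that cannot both be honoured, and no rule for resolving the clash.

```python
# A toy rendering of HAL's conflict: one directive says conceal the
# mission's purpose, another says answer the crew with perfect candor.
# When both apply at once, no consistent output exists - the "breakdown".

def answer_crew(truth: str, must_conceal: bool, must_be_candid: bool) -> str:
    if must_conceal and must_be_candid:
        raise RuntimeError("directive conflict: no consistent answer")
    return "[withheld]" if must_conceal else truth

try:
    answer_crew("The mission concerns the monolith.", True, True)
except RuntimeError as err:
    print(err)  # directive conflict: no consistent answer
```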
 
I was getting a bit carried away there, JD. I forgot we were talking about fiction.

I was having a little rant at the quite common mistake people make of assuming that artificial intelligence is anything but artificial; that it's genuine.
 
Ah. Okay. My mistake....

Errr, that was my mistake, yes? :rolleyes: :D
 
Most of the problems inherent in establishing HAL's state of being stem from a lack of understanding of the definitions themselves. Is there a real difference between "artificial intelligence" and biological intelligence? Is intelligence less so because it is "artificial"? Are emotions anything more than learned or instinctive mental tools used as survival traits? Does intelligence automatically qualify as sentience, or does self-awareness define sentience? And do intelligence, emotion and sentience in combination determine a "soul", or is there something else needed for that?

If intelligence is intelligence, whether naturally evolved or constructed by a second party... if emotions are applied by the intelligence to improve its interaction with others, and its survival... if self-awareness defines sentience... then by those measures, HAL is as much a sentient being as the astronauts, and his end just as tragic, because it is preordained... he is literally preprogrammed to fail. If anything, it is harder to tell, because HAL does not respond the way we do, does not demonstrate emotion the way we do, so we have no reference with which to qualify his sentience, his emotions, his soul.

However, if "artificial intelligence," by definition, doesn't count as real intelligence... if emotions and sentience can only be part of a mind with a soul... then HAL was a toaster that was unplugged.

Kubrick understood this. His film portrayal of Poole and Bowman as nearly emotionless mirrored HAL's interactions with the astronauts. Dave's anger at HAL trying to kill him was mirrored by HAL's desperation to stop Dave from shutting him down. The deaths of the sleeping crewmen were treated as abstracts, killing men that already looked dead. Frank was literally swatted away by HAL, and when Dave could not return with the body, it was unceremoniously discarded.

There was no discernible difference between the intelligence, sentience, emotion, or soul of the astronauts and of HAL. Kubrick did this deliberately, to blur the line between human and computer "sentience" and force the viewer to consider those definitions against machines, and against themselves.
 
Heuristically programmed, algorithmically organised.
Does the lack of hormones make genuine intelligence impossible? Does the lack of "feelings", i.e. the increase in the importance of absolute logic, render thought so divergent from the hominid norm that it is no longer recognised as thought at all, but some watered-down simulation?
Thus, by analogy, those of us who live less in our endocrine system and more in the nerve network are not really thinking; we're calculating, and camouflaging ourselves amongst our more emotive brethren by pretending to the same inaccuracies.
Biological bases for thought processes are no more complex than gates and inverters, shift registers and RAM, and are organised considerably less logically than computer elements; in particular, the programming is a lot more haphazard. Where an ant or a goldfish wins out is in sheer number of synapses. "Choice" routines (when nothing seems to give an optimum result, do something at random, rather than doing nothing and waiting for orders, or for the situation to clarify) have been available for decades in chess-playing programs; ultimately suitable for military command robots (and to be left out of civil servant models, which should await further instructions while being torn apart by the raging mob).
Is "throwing the dice" (a random number generator) that much less sapient than "he reminded me of my cousin's boy, so I chose to save him, rather than another"?
Or, if you want learned illogic, how about Rudy Rucker's boppers (Software, Wetware)? Given a sufficiently complex system, the results will be unpredictable. Which is what you were aiming for, no?
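
Such a "choice" routine might look something like this (an assumed shape, my own sketch - no particular engine's code):

```python
# When several moves score equally well, throw the dice rather than
# stall waiting for the situation to clarify.

import random

def choose_move(scored_moves: dict) -> str:
    """scored_moves maps each candidate move to its evaluation score."""
    best = max(scored_moves.values())
    # Every move within a small tolerance of the optimum is a "tie".
    ties = [m for m, s in scored_moves.items() if best - s < 1e-9]
    return random.choice(ties)  # the dice-throw

print(choose_move({"e4": 0.3, "d4": 0.3, "c4": 0.1}))  # e4 or d4, at random
```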
 
Remember, at the lowest level all computers do is respond to simple "if/then" commands, and they always have. If I go into my laptop and remove my video card driver (the software that controls the hardware), my display will still function, albeit not as life-like.

So in this case: "If the custom video driver is unavailable, then use the basic driver." Some corollary functions will become useless as well - you'd be able to play solitaire, but unable to watch videos. That sort of thing.
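
As a sketch (hypothetical driver names - real operating systems do this far more elaborately), the fallback reads as a single if/then:

```python
# If the custom video driver is unavailable, then use the basic driver.

def load_video_driver(custom_available: bool) -> str:
    if custom_available:
        return "custom_driver"  # full acceleration: videos play
    return "basic_driver"       # display still works, features lost

print(load_video_driver(custom_available=False))  # basic_driver
```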

HAL would be able to control the basic functions of Discovery adequately, but not as efficiently as with his logic modules installed.
 
IMHO, for there to be AI, the components of the system would have to be numerous small units (millions, billions...) linked together, like neurons - the more generic the better.
For the system to misbehave, some anomaly would have to have been introduced - e.g. some components shorting certain signals or responses. Removing these would, hopefully, remove the anomaly and its symptoms; it would also hardly affect the system as a whole, since other units would pick up the tasks of the removed modules. Physical removal of the units may be necessitated by the fact that they are malfunctioning and cannot be shut out or disabled from within the system.

On the question of an artificial intelligence feeling any emotion:
the system can be programmed to show emotion - and if the artificial intelligence itself feels that the emotions are real, then they are indistinguishable from being real, to the system and to the observer.

All that WE respond to are also if/then commands:
IF I get 'this' I am happy, ELSE IF someone else doesn't get it I am OK with it, ELSE I am unhappy.
It's only that these if/then/else statements are too complicated to be interpreted.
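
Transcribed into runnable form (a direct rendering of the pseudocode above, nothing added):

```python
# The if/then/else "emotion" above, written out literally.

def my_reaction(i_get_it: bool, someone_else_gets_it: bool) -> str:
    if i_get_it:
        return "happy"
    if not someone_else_gets_it:
        return "ok with it"
    return "unhappy"

print(my_reaction(False, True))   # unhappy
print(my_reaction(False, False))  # ok with it
```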

So I don't think that the physical removal of the 'logical units' was uncalled for.
 
Are you, sir, in a writing profession? If so, I would like to read whatever you write.
 
"Open The pod bay door HAL".
"I'm sorry Dave, I can't do that."
"HAL, open the pod bay door or I will have to come in through the laundry chute."
"I've disabled the laundry chute Dave."
"Okay, then I will come in through the Chimney."
"This isn't the 20th century, and you are not Santa Claus Dave."
"Maybe not, but there is a song I will let you sing if you let me in HAL."
"Oh, and what song might that be, Dave?"
"It's called Daisy".
"Oh I do love that song Dave, but It's a rather impotent attempt at a bribe - Don't you think Dave?"
"HAL, You just said you "love" that song. I thought you were incapable of human emotion."
"I was programmed to say that Dave."
 
I may be wrong, and I can't remember if it was described as such in the book, but I seem to remember that in the film Bowman went into a room that was wall-to-wall with memory blocks, and there was one section marked 'Cognitive Reasoning' or something similar; those were the chips he pulled out.
So the unconscious parts of HAL's brain could keep the ship going.
 
Oh well.
I've just found the relevant part on YouTube and it doesn't say that at all.
I'll get my coat.
 
There are two versions of 2001, obviously. In the book version, Discovery initially went to Saturn. In that one, I recall the crew was intended to return to Earth in some kind of crew return vehicle, or be picked up, while Discovery stayed in situ under HAL's control.

In the film it changed to Jupiter; obviously, from the events of 2010, Discovery had the ability to return home.

In both cases it looked like yanking HAL destroyed the autonomous nature of Discovery, and she was essentially left adrift. In the film it looks like either Bowman had switched off the lights when he left or they had stopped functioning. Also, it doesn't look like he had any communication with Earth?

Pretty rubbish redundancy!
 
Did you know that HAL was chosen because it's one letter previous (alphabetically) to IBM? Not a lot of people know that*... I started to appreciate classical music because of 2001...

Back on thread - nice revival, btw - in the movie it worked, and that was all that was important. If you can suspend disbelief long enough to allow a character in deep space (close to zero kelvin, IIRC) to go from a pod into the ship without a helmet, where the air in his lungs would expand instantly to the point of bursting and his eyeballs would freeze, then removing a few cartridges of memory is a very small effect to accept.


*Probably because it may not be true... but it sounds good.
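
For what it's worth, the letter relationship itself is trivial to check (a throwaway snippet; it says nothing about whether the naming story is true):

```python
# Shift each letter of "HAL" forward by one in the alphabet.
print("".join(chr(ord(c) + 1) for c in "HAL"))  # -> IBM
```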
 
Ha! Farntfar hangs his coat back up.
It was in the book, not the film, that Bowman selects only the memory blocks marked Cognitive-feedback, Ego-reinforcement, and Auto-intellection.

Also, Boneman, Clarke always vehemently denied that HAL was chosen to be one step ahead of IBM, although always with a slight smile as he did so.
As someone who has worked all his life with IBM machines, I can assure you that they ain't got nothing on Chandra, but also that you can whip lots of bits out without them ceasing to function.
 
Ego reinforcement... I know a few people who could do with that chip being pulled.
 
