Programming Languages

Moonbat

After reading an article in this month's New Scientist, a few thoughts about programming languages have popped into my silly little head.

Is it possible to have an infallible programming language? One that can not only always perform the tasks it was designed to do, but also one that cannot be misunderstood. If we look at natural languages like English, they have evolved over hundreds if not thousands of years, yet in conversations where the concepts being discussed are detailed, intangible and ultimately quite complex, we still often have to ask for clarification. Could a language based on the logical functions of microchips ever surpass this level of universal understanding?

Programming languages (at least high-level ones) use keywords to express ideas, and these can be linked together in chains to form complex functions, much as a sentence can be formed from words to convey a complex idea. With sentences, though, the economy of word use is directly related to the speaker's or writer's vocabulary: if I'm talking to someone and they use a word I don't understand, I may ask for its meaning, and they will use more words to describe that single word to me. In the same sense, where a microchip can only perform something like (really not sure on the number here) 8 to 16 actual functions (maybe fewer), the keywords used in programming languages are made up of several (if not hundreds) of these functions in a particular order (see the little sketch below).
Programmers pride themselves on efficient code, in the same way that writers or linguists pride themselves on efficient word use and sentence structure (I haven't really mentioned the syntax of languages, which is vitally important and relevant to the discussion). Could a programming language ever reach the level of a natural one?
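To make that concrete, here's a minimal sketch (in Python, purely for illustration) of how a single high-level "word" stands in for a whole chain of simpler operations spelled out by hand:

```python
# One built-in "keyword" (sum) versus the same job written out as a chain of
# elementary steps: load a value, add it, store the result, repeat.
numbers = [3, 1, 4, 1, 5]

concise = sum(numbers)        # a single high-level word does the whole job

spelled_out = 0               # the same job as explicit elementary steps
for n in numbers:
    spelled_out = spelled_out + n

print(concise, spelled_out)   # both print 14
```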

All these thoughts bring me on to the idea of sentience (although it is not yet known how or what it really is), the idea of AI sentience, and the philosophical Sorites paradox. This has made me wonder (partially for a WIP) about the line that could, in the future, be drawn between AI sentience and just plain computing power.

If we say that taking away a single point of IQ would not make me non-sentient, then removing my brain power in increments like that would still, eventually, drop me below the idiot threshold (should that have two h's?) that makes me non-sentient (or indeed lower) - see the very interesting link from Ursa in a different thread. I'm interested in how this would relate to AI sentience, as it wouldn't be too great a leap to assume that the sentience of future AIs could and would be measured in terms of MHz or THz, or teraflops, or GB of RAM (obviously ignore the wildly different scales of those and assume they are relevant to the sentient AIs of the future). If I take one single flop from my sentient AI's mind, it wouldn't be considered non-sentient, but at some point (if I kept removing flops) it would descend into regular AI/computer status.

Anyhoo, I wonder if you guys/gals have any thoughts on the matter (assuming I haven't dulled you into a state of moronic non-sentience).
 
I'm not sure I can comment on the IQ stuff but I can comment on the programming language stuff. The first thing to appreciate is that, unlike natural languages, programming languages are absolutely deterministic. Any lack of determinism comes from our own misunderstanding. And there's your problem: natural language is almost deliberately non-deterministic, so I don't think a programming language would ever become comparable to a natural language. Though possibly some of the so-called AI languages, like Lisp and Prolog, might; I'm no expert on them.
 
There's another, deeper point here: the one made by the Turing machine halting problem and Gödel's incompleteness theorem, which apparently (the maths is way beyond me) come to much the same thing. That is, it can be proved that in any sufficiently powerful formal system (which certainly includes programming languages) there are statements whose truth cannot be determined within the system. The halting problem version is that there is no general procedure that can decide, for every program, whether it will ever terminate.
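For what it's worth, the halting problem argument can be sketched in a few lines of code. The halts() function below is purely hypothetical - the whole point is that it cannot actually exist - but the sketch shows why:

```python
# Hypothetical decider, assumed for the sake of argument: returns True if
# program(data) eventually terminates, False if it runs forever.
def halts(program, data):
    ...  # cannot actually be implemented for all programs

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about program run on itself.
    if halts(program, program):
        while True:
            pass      # loop forever
    else:
        return        # halt immediately

# Now ask: does troublemaker(troublemaker) halt? Whichever answer halts()
# gives, troublemaker does the opposite - so no correct halts() can exist.
```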
 
I'd argue that there are high-level programming languages which are (or can be) non-deterministic. Even languages used for safety-critical applications (such as Ada) can be written in ways that allow non-determinism.
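As a rough illustration (a Python sketch, not Ada, and nothing safety-critical about it): two threads updating a shared counter without a lock can interleave differently on every run, so the same program with the same input need not give the same output.

```python
import threading

counter = 0

def bump(times):
    global counter
    for _ in range(times):
        counter += 1  # read-modify-write: not atomic, so updates can be lost

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Depending on interpreter and timing, this may print less than 400000,
# and it can differ from one run to the next.
print(counter)
```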

Your basic problem with programming languages is that of comprehension. As all good programming courses will teach, the biggest cost of errors in software development occurs at the requirements stage. There are two parties: those who know the problem and those who know the software. The difficulty is getting comprehension of the problem domain into the software. The more formal the programming language, the more difficult it is for the domain expert to verify it (and vice versa).
 
If I take one single flop from my sentient AI's mind, it wouldn't be considered non-sentient, but at some point (if I kept removing flops) it would descend into regular AI/computer status.

Very interesting thoughts, Moonbat. I think the stumbling block for me here is the question 'What is sentience?' (Well, there were a few others, but I suppose this was the biggy. :))

All computers can in theory be represented as a Turing machine, and it is argued, as Mirannan has mentioned, that there are very deep concepts that we humans are apparently capable of gaining insight into but that a Turing machine (and hence current AI) will never be capable of grasping. (I thoroughly recommend The Emperor's New Mind by the mathematician Roger Penrose, which puts forward this argument - very readable.)
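As an aside, "represented as a Turing machine" sounds grander than it is - a Turing machine is just a tape, a read/write head, and a table of rules. A toy simulator (a Python sketch; the rule-table format here is my own invention for illustration) fits in a few lines:

```python
# Minimal Turing machine simulator: rules map (state, symbol) to
# (symbol to write, direction to move, next state).
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit on the tape, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "10110"))  # prints 01001_ (the trailing blank gets written)
```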

Does this mean that we are sentient and that computers never can be? Maybe, or are there perhaps different levels of sentience? I mean, after all, I believe a cat is sentient - but it would score zero in an IQ test.

But then we say we should design a test that all 'sentient' beings can have a go at (I believe attempts have been made). However, we run into the fundamental problem of how biased such a test would be - surely we humans, the designers of such a test, are applying just our own standards and beliefs.



My current stance is that all attempts to make artificial intelligence will fail on a deep level. Firstly, because we ourselves can't define the basic terms and have little understanding of how our own consciousness and thoughts work; and secondly, because the deep flaw described above shows that current computers just aren't the same as us.

At best we will be able to mimic sentience and intelligence - but in my mind mimicking something is not the same as the actual thing. (We're getting into "I think, therefore I am" territory now - yet another intractable problem :))

I'm not hardline in this view - there are a lot of discoveries to be made on so many fronts, and perhaps a breakthrough in favour of AI will resolve the Turing halting/Gödel theorem issue. Who knows what the future will bring.
 
Ah, Searle - had to do his stuff in philosophy classes at Uni.

There is the mathematical argument that anything that is indistinguishable from something else is logically that something else. Or "if it quacks like a duck, walks like a duck, and tastes like duck l'orange when cooked, then it's a duck"

After all, the argument goes, I can't actually prove that other people have the same experience of consciousness that I have, but I am happy to ascribe consciousness to them. If a machine demonstrates all the visible characteristics of intelligence and consciousness that other humans do, why should I make a different assumption for it ("it's not really conscious, it's just faking it") than I do for humans?
 
After all, the argument goes, I can't actually prove that other people have the same experience of consciousness that I have, but I am happy to ascribe consciousness to them.

Hence my "I think therefore I am" quote.

On one level I am (happy to ascribe a mind like mine to people and other beings), but on some fundamental level I can't - hence

If a machine demonstrates all the visible characteristics of intelligence and consciousness that other humans do, why should I make a different assumption for it ("it's not really conscious, it's just faking it") than I do for humans?

I can :)

And part of my 'they are different' argument is that on some deep level there appears to be something quantifiable that sets us (or just me, if I'm in a deep Descartes moment :p) apart.

However, to repeat, knowledge in the future may show this difference to be illusory or wrong, and I fully admit it's a very subtle nuance. So I'm open to changing my view!
 
Moonbat, programming languages are so-called formal languages. They differ from natural languages in that they consist of mathematically precise instructions that have to do with the computer's operations. They don't concern themselves with any kind of communication that is not directly, literally in the code. Well, there is context of sorts, in variable scopes, but even that is a precisely defined concept (unlike context in natural languages).
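A tiny Python example of that "precisely defined context": the same name can refer to two different things, but the rules that decide which one you get are exact, with no room for interpretation.

```python
x = "outer"

def report():
    x = "inner"    # this binding exists only inside the function
    return x

print(report())    # "inner" - the local binding wins inside the function
print(x)           # "outer" - the outer binding is untouched
```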

As others have pointed out here, current computers operate totally differently from human brains. Ultimately, they are just machines, following every instruction exactly as told. It is no different from a car you drive, in that regard: a car also acts as it is "told" by the driver's input.

There just is no awareness like that of the human brain there. The comparatively few things a computer can do, however, it does much faster than a human and with no risk of error (unless the programmer made a mistake, which is also human error). Complex programs with many direct instructions can emulate some aspects of human behaviour, but it is never going to be the same. At least not until we make some leap in our understanding of human intelligence (or animal intelligence in general, of which we are in many ways the most advanced example) and can build a new kind of computer from those findings, if (and that is a major IF) that ever happens.

This does seem like stating the obvious, but never forget that programming languages are ultimately created to give instructions to a machine. Even though high-level languages are more human-friendly, this is because they abstract away from low-level concerns such as machine-specific architecture, let compilers handle the last part(s), and give a better overview of the programs you make than the low-level spaghetti-code nightmare. They don't truly come any closer to human ways of dealing with the problems they are given.
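You can even watch that "last part" being handled. CPython's standard-library dis module shows the low-level instructions a friendly high-level line gets translated into before anything runs (the one-line function here is just an example of mine):

```python
import dis

def total(numbers):
    return sum(numbers)   # one friendly high-level line

# Prints the bytecode the interpreter actually executes - instructions along
# the lines of LOAD_GLOBAL and CALL (exact names vary by Python version).
dis.dis(total)
```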
 
