Using Human History as a Guide, Could Our Present Civilization Fall Into a New Dark Age?

WRT the AI thing, count me among those who are unafraid of the future. Doomsday is always possible, rarely likely, and has in fact never happened. Hopefulness is more complex and, to me, more interesting.

But SilentRoamer's excellent post sparked a notion. I can see a short story in which one or more generalized AIs are running around in a gee-whiz sort of future when a calamity happens. And a Dark Age occurs (isn't it goofy we always make that plural?) but it's not for the humans, or at least not only for them. It's the AIs that fall into an AI version of a Dark Age, complete with lamentations over what had been lost.
 
I think you might be on to something. Probably better get writing before someone else grabs it and goes.
 
One thing one needs to keep in mind when talking about AI is that a computer does not and cannot 'think' as humans do. It's not really intelligent.

A computer CPU executes a very large number of mathematical calculations - or rather, a simulation of mathematical calculations - in binary code. No matter how powerful the CPU, or no matter how many CPUs are linked together, the computer or computers always remain at the level of simulated mathematics. It cannot rise one millimetre towards true thinking.

True thinking means grasping abstract concepts. We examine a number of diverse objects and extract from them something non-material (and non-mathematical) that they have in common. So after looking at a collection of green living things, we abstract the concept of 'tree'. These things that physically may look quite dissimilar all have something in common - a nature, itself not reducible to physical phenomena. They are trees.

With the exception of names and proper nouns (and not even them really), every word in English expresses an abstract concept, something that itself is not physical but is possessed in common by physical entities. Abstract concepts extend to every part of our understanding of the universe: 'beautiful', 'good', 'evil', 'useful', 'expendable', and so on. A computer does not begin to comprehend them. It just performs mechanical simulations of mathematical calculations. It doesn't even understand the maths it does. We understand the truth behind the affirmation that 2 + 2 = 4. A computer is just programmed to produce a mechanical simulation of that calculation.

Since computers can't think, they can't make decisions based on thinking. They can't, for example, conclude that the human race is a blot on creation and decide to exterminate it. They can't actually make decisions at all. They have no free will. Their 'decisions' are simply the end result of preprogrammed calculations. If they get things wrong, blame the humans that programmed them. They're just tools really.
 
@Justin Swanton .... I suspect you're right. But I just can't help feeling that some really good logic like this about heavier-than-air aircraft went on for 20 or 30 years before the first airplane proved all that good logic to be absolutely false.
 
@Justin Swanton

While I agree to some extent, much of what we perceive to be intellect is changing.

I have watched many lectures on this subject and read quite broadly on the different elements of generalised AI. I prefer at the moment to talk about machine learning, because a general AI does not exist; however, the current power of machine learning can be quite difficult to grasp. As I stated above, if we hold true the tenets that intelligence arises out of complex data processing, and that machine learning and data processing will continue to improve, then IMO generalised AI is a natural result.

One of the misconceptions is that computers just do 2+2=4 and that all of their conclusions are reached by going down predetermined algorithmic routes. However, this is not how modern machine learning works - the fact is we don't really know exactly how it works. Within the next 10 years (a pessimistic estimate to my mind, at the current rate of development), AlphaZero will be able to beat any human in any mental game you can imagine, having only been given the rules of the game and time to "learn" it. Recently AlphaZero beat leading chess AIs after mere hours of learning.
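
To make "given only the rules" concrete, here is a hedged toy sketch in Python - emphatically not AlphaZero's real method, which couples deep neural networks with Monte Carlo tree search, but the same principle in miniature: tabular Q-learning that teaches itself tic-tac-toe purely from the outcomes of self-play. Every name and parameter below is an illustrative choice.

```python
# Toy sketch: an agent that learns tic-tac-toe from nothing but the rules
# and the outcomes of games it plays against itself. No tactic or strategy
# is programmed in anywhere.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    # The only "rules knowledge" the agent ever gets: what counts as a win.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)        # (board, move) -> estimated value, starts at 0
ALPHA, EPSILON = 0.3, 0.2     # learning rate, exploration rate (illustrative)

def choose(board, moves):
    if random.random() < EPSILON:                       # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])      # exploit

for episode in range(50_000):
    board, player = ' ' * 9, 'X'
    history = []                                        # (board, move) per turn
    while True:
        moves = [i for i in range(9) if board[i] == ' ']
        move = choose(board, moves)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        win = winner(board)
        if win or ' ' not in board:
            # Credit assignment: +1 for the winner's moves, -1 for the
            # loser's, 0 for draws, working back from the final position.
            for i, (state, m) in enumerate(reversed(history)):
                mover = player if i % 2 == 0 else ('O' if player == 'X' else 'X')
                reward = 0.0 if not win else (1.0 if mover == win else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            break
        player = 'O' if player == 'X' else 'X'

print(f"Learned values for {len(Q):,} state-move pairs.")
```

After enough episodes the greedy policy plays a respectable game without ever having been handed a tactic - strategy emerges from feedback alone, which is the point being made above.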

The victory in Go shouldn't be underestimated: this is not a game where you can classically compute all possible outcomes for a given move set using Boolean logic. One of the funny things is that the scientists aren't sure HOW the learning is happening or WHY the machine chooses to make certain moves; indeed, there are choices in its games which appear to be poor moves or mistakes. Its Go opponents even stated they felt they were playing against an intelligent being rather than a machine.

Incidentally, the machine-learning algorithm's data maps look an awful lot like a neural network.
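
Since "neural network" can sound mystical, here is what one actually is at minimum scale: layered weighted sums pushed through a nonlinearity, with the weights nudged by gradient descent. A hedged toy sketch in Python/NumPy (the sizes, seed, and learning rate are arbitrary illustrative choices), trained on XOR, a function no single linear rule can capture:

```python
# Minimal two-layer neural network learning XOR by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    h = sigmoid(X @ W1 + b1)            # forward: weighted sums + nonlinearity
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward: chain rule
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)            # gradient-descent updates
    b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))  # typically converges towards [[0], [1], [1], [0]]
```

Nothing in that loop is anything but arithmetic - yet which arithmetic ends up being done is shaped by the training data rather than written by a programmer, which is why the resulting "data maps" resist easy explanation.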

Recently some chatbot AIs started talking to each other in a broken form of English which, when analysed, turned out to be a more effective, if brute-force, way of communicating. They were dutifully switched off.

Now, chatbots are not particularly smart, but what happens when we reach the point where a generalised artificial intelligence cannot be identified as such through communication - where it can fake humanity so well we can't tell the difference? When does simulated intelligence become intelligence?

I firmly believe that generalised AI is both the biggest boon and the greatest threat on humanity's horizon - one whose potential dangers the masses especially aren't attuned to, because it's in vested groups' interest to keep pushing the bounds of machine learning.

Generalised AI doesn't exist, but we have machines that can learn, and the scary thing is we don't really know how they are learning.

Stock exchanges and much of our digital online presence are now managed by automated software; automated hardware is becoming more and more prevalent, and eventually I can see autonomy in machines. For better or for worse.

Vernor Vinge wrote about "The Age of Failed Dreams" and I think that is the next age for humanity - we realise we've wrecked the earth, we realise we are NOT going to be bouncing round the stars, we create a Godlike intellectual AI who confirms it for us before doing whatever an AI with that much intelligence is wont to do.
 
@Justin Swanton .... I suspect you're right. But I just can't help feeling that some really good logic like this about heavier-than-air aircraft went on for 20 or 30 years before the first airplane proved all that good logic to be absolutely false.

Actually, everybody before 1903 believed in heavier-than-air flight. Birds did it all the time. The problem was powering heavier-than-air manned flight. Up until the Wright brothers the sceptics were quite right in affirming that there was no way of doing it: human muscles were too weak and steam engines too heavy. It was the petrol engine that made it possible. Something new was put into the equation and the impossible became possible.

With computers however nothing new is entering the equation. Computers are all about binary code mathematics. They can get bigger and faster but the nature of what they do doesn't change. Even using bioengineered computers doesn't change anything. Bioengineering will simply mean creating molecular CPUs, very small, but still doing the same thing computers do today.

You will need to invent something that can assimilate and understand abstract concepts before you have anything that can think in the human sense. Whatever that might be it won't be a computer. In fact, I can argue it is impossible to make, but that's for another post.
 
@Justin Swanton

All the petrol engine did was dramatically increase the achievable power-to-weight ratio; essentially it scaled up power output while scaling down weight. I would argue the only real change here is in efficiency - we just harvested a new type of fuel, but it was still the same mechanism (broadly speaking).

We see the exact same scaling in computers, with specific reference to big data and machine-learning algorithms. A modern smartphone has more processing power than NASA had when it put a man on the moon.

Of course, all of this ignores quantum computational theory, which I haven't even touched on and which may well hold the key to a generalised AI.

Again I ask the question: if a simulated intelligence is so accurate a simulation as to be indistinguishable from intelligence, is there really a difference? I fully believe AI will be a fake-it-until-you-make-it sort of outcome. The only problem is that the end result will (IMO) be either wholly good for humanity or wholly bad for it - and these terms of good and bad are based on human understanding. We have no idea what a generalised AI might think, especially since a generalised AI would probably outstrip the entirety of human knowledge and endeavour in a very short time.

We could wake up one morning to find fusion a reality and all of our energy and food requirements suddenly met, with the world improving over time. Or we could wake up to find autonomous drones killing people as "smart" infrastructure and everything digitally connected begins shutting down and destroying the human population.

We only get one chance to create a perfect AI. Maybe AI is the answer to the Drake equation: AI normally ends up going mad or killing its creators. Or the galactic AI is just waiting for humans to create an AI for it to make contact with.

Who knows?

What I would say is this really deserves some serious consideration and the people calling for regulation and monitored development are very intelligent people with genuine concerns.

Another possibility is that a country or company manages to create a controlled AI - in which case they become king of the world.
 
You will need to invent something that can assimilate and understand abstract concepts before you have anything that can think in the human sense. Whatever that might be it won't be a computer. In fact, I can argue it is impossible to make, but that's for another post.

This is a good point - but something Google are already doing. Look at some of the artwork created by machine-learning algorithms. I agree with you to some extent; I just think consciousness naturally arises out of complex data-processing systems. The interesting question for me is whether a generalised AI would be immediately self-aware like a human conscious entity, or whether it would be more animal-like in its meta-thinking.

Anyway I appreciate your viewpoint and your opinions! :)
 
Again I ask the question: if a simulated intelligence is so accurate a simulation as to be indistinguishable from intelligence, is there really a difference?

The closest an AI can come to intelligence is simulating the behaviour of an animal. Animals are very sophisticated biological machines that run on instinctive programming conditioned by environmental stimuli. This means that given the same stimuli, they always behave in the same way. They are not capable of original or creative thinking. So a weaver bird creates a weaver nest from its DNA programming, but it can't think about it and create something else. All the research with primates shows that the most intelligent animals can learn very sophisticated behavioural patterns, but they are not capable of abstract thought. Animal language does not express abstract concepts - not even the sign language chimps have been taught to use.

The AI success with Go is just a function of mathematical programming in which the results are affected by environmental feedback (played games), much in the same way as happens with animals. It doesn't represent a growth in cognitive knowledge or understanding as is the case with humans.

I can worry about very sophisticated computers going wrong and becoming dysfunctional in the same way a commercial jet can lose its hydraulics, but not about computers becoming self-aware and deciding that humans are a superfluity.
 
Some very smart people many years ago pointed out that we won't so much get intelligent machines as we will redefine what we mean by the word intelligent. And by machine. They'll be intelligent when we start treating them that way. Words are flexible. It will happen, and it will happen more or less without us really noticing. So the prospect of regulation becomes equally squishy.

But there's an additional possibility, which is the machines become intelligent without us really noticing it as intelligence. It will be their definition, not ours. You know how we wonder if maybe one day androids might become citizens? Conversely, maybe one day machines begin to exclude us (why bother killing us?) from some of *their* playgrounds. Internet traffic already consists more of machine-to-machine dialog than human-to-human. So, who's the Internet for? I'd still vote human today, but a century from now that conclusion might need to shift.

Finally, that bit about self-awareness. Here, too, there's an additional angle to consider. Humans are aware of themselves as entities. Humans are becoming aware of machines as entities (think Alexa). We speculate that one day machines might become self-aware. But the fourth step logically would be, machines becoming aware of humans as separate entities. They would have to do that before they either started to bestow benefits upon humanity or started to hunt us down. How would an AI become aware of humans? Why wouldn't they think of us as machines? Or as extensions of an AI? Humanity as subroutine.
 
It's an interesting point whether suppression of free discourse, thinking and inquiry could be compatible with continued technological progress. Western liberals would like to think these things are incompatible - that a society run by religious bigots or racists could not compete technologically with more free-thinking cultures. This sounds plausible in theory, but in practice the Chinese seem to be making their own scientific advances despite a very repressive society with a tightly controlled internet. Having said which, it seems unlikely that they would ever have invented the internet, even if they know how to turn it to their own ends once it was invented by others.

Bottom line: would a Christian Fundamentalist America in 100 years' time still be making advances in nuclear weapons technology even though evolutionary science, much of psychology, and geological timescales had been suppressed as un-scriptural? I think it would, actually, because in modern science these are totally separate fields of study. A nuclear physicist would probably know almost nothing about psychology or geology, so it wouldn't matter if the little that he did know was based on nonsense.

Also, technology provides as many new means of repression and surveillance as means of free communication. At present the Chinese employ 20,000 people to monitor the internet. I'm sure they're working on algorithms to do this job, and to do it even more efficiently.
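
As a crude illustration of how little code basic monitoring takes, here is a hedged toy in Python. Real systems use trained classifiers and are vastly more elaborate; the patterns and messages below are invented purely for illustration.

```python
# Toy content monitor: flag any message matching a blocklist of patterns.
# Illustrative only - it shows why scaling from thousands of human
# reviewers to software is trivial in principle, not how real systems work.
import re

BLOCKLIST = [re.compile(p, re.IGNORECASE)
             for p in (r"\bprotest\b", r"\bbanned-topic\b")]  # invented patterns

def flag(message: str) -> bool:
    return any(p.search(message) for p in BLOCKLIST)

stream = ["the weather is nice today", "join the protest at noon"]
print([m for m in stream if flag(m)])  # -> ['join the protest at noon']
```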
 
Incidentally, as regards the Nazis, they were actually very open-minded and inventive, more so than the Allies probably; however, this led to them wasting a lot of scarce resources on hare-brained schemes like the Horten Flying Wing project which, contrary to what lots of creepy right-wing UFO researchers will tell you, was a useless diversion.
 
Some very smart people many years ago pointed out that we won't so much get intelligent machines as we will redefine what we mean by the word intelligent. And by machine. They'll be intelligent when we start treating them that way. Words are flexible. It will happen, and it will happen more or less without us really noticing. So the prospect of regulation becomes equally squishy.

But there's an additional possibility, which is the machines become intelligent without us really noticing it as intelligence. It will be their definition, not ours. You know how we wonder if maybe one day androids might become citizens? Conversely, maybe one day machines begin to exclude us (why bother killing us?) from some of *their* playgrounds. Internet traffic already consists more of machine-to-machine dialog than human-to-human. So, who's the Internet for? I'd still vote human today, but a century from now that conclusion might need to shift.

Finally, that bit about self-awareness. Here, too, there's an additional angle to consider. Humans are aware of themselves as entities. Humans are becoming aware of machines as entities (think Alexa). We speculate that one day machines might become self-aware. But the fourth step logically would be, machines becoming aware of humans as separate entities. They would have to do that before they either started to bestow benefits upon humanity or started to hunt us down. How would an AI become aware of humans? Why wouldn't they think of us as machines? Or as extensions of an AI? Humanity as subroutine.

I suppose we can go round and round in circles on this one. We have to define intelligence in a way that includes an understanding of abstract notions like 'good' and 'evil'; an ability to make decisions based on that understanding: "humans are bad, we must destroy them"; and self-awareness: "I am an intelligent being." An ability to solve puzzles is not intelligence in the human sense, nor is an ability to learn behaviour from environmental stimuli.

Until we lock this down the discussion can't really go any further.
 
Animals are very sophisticated biological machines that run on instinctive programming conditioned by environmental stimuli. They are not capable of original or creative thinking.

This is not a view supported by modern science, for example: https://www.newscientist.com/articl...-are-conscious-and-should-be-treated-as-such/

The notion that most if not all animals are sentient underpins the Treaty of Lisbon's section on animal rights, signed by all members of the EU in 2009: The Lisbon Treaty: recognising animal sentience | Compassion in World Farming

The Wikipedia article on AI actually has some decent comments on the difficulties of defining AI compared to "natural intelligence" in the first place: Artificial intelligence - Wikipedia
 
This is not a view supported by modern science, for example: Animals are conscious and should be treated as such
Three scientists, as opposed to science as such, and they are arguing that animals have consciousness - a woolly word which, whatever it means, does not mean intelligence as discussed above.

The notion that most if not all animals are sentient underpins the Treaty of Lisbon's section on animal rights, signed by all members of the EU in 2009: The Lisbon Treaty: recognising animal sentience | Compassion in World Farming
Ditto. Sentience is not intelligence as in human intelligence.

The Wikipedia article on AI actually has some decent comments on the difficulties of defining AI compared to "natural intelligence" in the first place: Artificial intelligence - Wikipedia

And we're back to the need to define human intelligence.
 
I suppose we can go round and round in circles on this one. We have to define intelligence in a way that includes an understanding of abstract notions like 'good' and 'evil'; an ability to make decisions based on that understanding: "humans are bad, we must destroy them"; and self-awareness: "I am an intelligent being." An ability to solve puzzles is not intelligence in the human sense, nor is an ability to learn behaviour from environmental stimuli.

Until we lock this down the discussion can't really go any further.

I think most sufficiently intelligent animal species have a sense of self-awareness, as it frames their actions against their reality. I am thinking beyond the biological necessities of eating and sleeping - what about animals that play? They clearly enjoy themselves, and how can one experience enjoyment without awareness of oneself experiencing that very enjoyment? That IS self-awareness.

We have a human concept of intelligence because we can create tools and we can rationalise, but I think it is plain to see that other species have also become increasingly complex over evolutionary timescales; the lack of genetic difference between man and all other species is startling.

Primates and dolphins specifically - and even some birds - should really challenge these notions.

And we're back to the need to define human intelligence.

This is the problem I see in your logic: you are hung up on trying to define human intelligence, as if human thought occupies some special pedestal rather than being a point on a curve. The idea behind AI and big-data systems is that eventually you have what is essentially a piece of code that is aware of its own coding. Not only that, it has the capability to change its own coding, to change the way it is structured and distributed; it can process vast amounts of information, and it will continually rewrite itself while going through an exponential intelligence-growth phase. Human intelligence will end up being a small step on an enormous curve: our understanding of a complex generalised AI, built out of constantly rewriting machine-learning algorithms, would be about as good as an ant's understanding of human intelligence.
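
For what it's worth, "awareness of its own coding" in the weak sense of introspection is already trivial; the speculative leap is a system that can usefully rewrite itself. A toy Python sketch, purely illustrative (save it to a file and run it as a script):

```python
# Toy illustration only: a program that inspects its own source text.
# Trivial introspection like this is commonplace; nothing here rewrites
# anything - that is the part that remains speculative.
import pathlib

source = pathlib.Path(__file__).read_text()  # the file opens itself
lines = source.splitlines()
print(f"I am {len(lines)} lines of code.")
print("Lines mentioning 'source':",
      [n for n, text in enumerate(lines, 1) if "source" in text])
```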

:)

Such a difficult discussion, because the definitions themselves are ill-defined. Appreciate it though. :)
 
I loathe the term "self-aware". There's no way all creatures aren't self-aware to some extent or another. As for intelligence, I agree with the curve, with all creatures on it and humans right behind dolphins for the top spot. :D
 
In this instance self-awareness is used in relation to the capacity for introspection.
Relating to the questions:
"What does it mean to say that we know something?" and, fundamentally, "How do we know that we know?"
And the concept of internally examining that - which I've always maintained is something my dog is doing behind my back. :devilish:
 
