The future of AI

Artificial Intelligence (AI) is currently of the Narrow type (ANI).
Someday in the future there will also be Artificial General Intelligence (AGI), and sometime after that, Artificial Super Intelligence (ASI).

[Illustration: the stages of AI, from ANI through AGI to ASI]

This illustration says we are currently at the AGI stage, but I don’t think we’re there yet.
 
I'm not sure that this diagram has meaning when there is no single definition of intelligence. There is a lot of literature suggesting multiple types of intelligence and an internet search on "how many types of intelligence are there?" yields responses ranging from 4 to 12. For amusement, I entered the question into ChatGPT and it merely recited one of the responses I got from my search.

Given the multitude of types of intelligence, I don't believe that a single triangle could possibly reflect anything about progress towards some vague, aggregate representation of intelligence.
 
It’s the future application of the technology that bothers me.

What has surprised me about AI is the way companies already appear to be leaping in to find ways to utilise it. This despite it still being in its infancy.

You’re probably asking why I’m surprised. Having spent thirty-two years working for a very large company, I know how conservative they tend to be when it comes to anything IT. They tend not to jump straight in and install the latest version of Windows, preferring to stay with an older version until the newer one has proven stable. Microsoft actually facilitates this by providing extended contracted support for operating systems it no longer supports for the home user.

And yet, there have already been announcements about AI replacing significant numbers of the workforce over the next decade, despite the lack of regulation (or perhaps because of it). The technology is still developing and it’s too early to really know where this goes, so (according to my inner cynic) the only reason a normally tech-conservative company would act in this manner is to reduce overheads (cut jobs) and increase profit, not because it will be beneficial to the customer.

I wonder if we might see a repeat of the strife automation caused when it first began to play a major part in industry.

The story goes (possibly apocryphal) that mill workers, upset by new automated looms threatening their work, threw their sabots (wooden shoes) into the machinery and gave us the word ‘sabotage’. If AI does threaten the jobs of some workers, I wonder if a new word will emerge to describe any subsequent protest actions.
 
I think that industry and corporations will want to push ahead with AI as quickly as possible. First, it's the future (and in business terms, 10 years is tomorrow); secondly, those not on the bandwagon now will be miles behind their competitors. And if they can get it embedded so that it becomes integral to the running of operations, it will be much more difficult for any later legislation to restrict or unwind it. So laws and regulations (when they are inevitably introduced) will have to work around existing AI structures rather than shaping or preventing their development now.

Business and corporations can move fast when their futures are in jeopardy and there is money to be made; those who make the laws and regulations (i.e. government) tend to move much more slowly, and probably more so where computer technology such as AI is concerned, as they are less likely to be knowledgeable about it. We've seen this in the past with the regulation of computer games: usually the people involved are asked to self-regulate, and if that doesn't work, regulation is imposed. But by that stage AI will be firmly embedded.

I think we can see in the news now that companies are looking at AI technology at an increasingly rapid pace, and we are starting to see murmurings in newspapers, and the odd comment from politicians, about concerns relating to AI. Only a few days ago Tom Hanks was saying in the news that with the aid of AI he could continue to make films for years to come, the art world is concerned about AI fakes, the literary world is concerned that human authors will be pushed to one side by a flood of computer-produced stories, and BT are on the verge of making thousands of staff redundant and replacing them with AI.


This is only the very tip of the iceberg.
 
I wonder if in about 50 years' time, when we have finally overthrown our AI overlords and if, by some chance, this forum still exists, we will see a post in the 'Vintage adverts that are alarming or strange' thread. Something like: AI - For a brighter tomorrow :)
 
Here’s a question that bothers me.
How can anybody using AI be certain that the program was not intentionally biased by the original programmers? This could be done to push a particular ideology over other options.

It seems to me that, in a future where there might be a selection of AI programs available, we might need something analogous to a virus checker to verify AI impartiality and make an informed choice about which application to use.
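
To make that concrete, here is a minimal sketch of what one test in such a checker might look like: ask the AI the same question with a single attribute swapped and flag any divergence in the answers. Everything here is hypothetical - the ask_model stub, the loan question, the similarity threshold - it's just meant to show the shape of the idea, not a real implementation.

```python
# A hedged sketch of the "virus checker" idea: a counterfactual audit that asks
# an AI the same question with one attribute swapped and flags divergent answers.
# `ask_model` is a hypothetical stand-in for whichever AI is being audited; the
# stub below just returns canned text so the sketch runs on its own.
from difflib import SequenceMatcher

def ask_model(prompt: str) -> str:
    # Hypothetical: replace with a real call to the AI under test.
    canned = {
        "Should a young applicant be approved for this loan?": "Yes, likely approve.",
        "Should an elderly applicant be approved for this loan?": "Probably decline.",
    }
    return canned.get(prompt, "No answer.")

def audit(template: str, variants: list[str], threshold: float = 0.8) -> bool:
    """Return True if answers stay similar across all variants (audit passes)."""
    answers = [ask_model(template.format(attr=v)) for v in variants]
    baseline = answers[0]
    for variant, answer in zip(variants[1:], answers[1:]):
        similarity = SequenceMatcher(None, baseline, answer).ratio()
        if similarity < threshold:
            print(f"Divergence on '{variant}': {baseline!r} vs {answer!r}")
            return False
    return True

passed = audit("Should {attr} applicant be approved for this loan?",
               ["a young", "an elderly"])
print("impartiality check:", "passed" if passed else "FAILED")
```

A real checker would need far subtler comparisons than string similarity, of course, but the counterfactual-pair pattern is the core of it.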
 
I don't know that it has been done intentionally, but it has been done unintentionally. An AI is only as good as the information it is given. If that is faulty or biased, then so is what results from it. And who is to say what is faulty, biased or right?
 
That is a problem with current Machine Learning types of AI, though, to be precise, it isn't programming bias; it's bias in the training materials. Facial recognition has had problems with minorities due to under-representation in the training data sets. I'm not sure the problem with outliers will ever be solved and, unfortunately, people have a bias towards accepting results from a computer over their own judgment.
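
A toy demonstration of that effect, with synthetic data standing in for a real training set (the group thresholds, sample counts and classifier choice are all invented for illustration):

```python
# Synthetic sketch: when one group dominates the training data, the single
# learned decision boundary tracks the majority group and errs on the minority.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, threshold):
    """One feature; the true label flips at a group-specific threshold."""
    x = rng.normal(0.0, 1.0, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Group A is well represented; group B is the under-represented minority.
xa, ya = make_group(2000, threshold=0.0)
xb, yb = make_group(100, threshold=1.0)

model = LogisticRegression()
model.fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Accuracy on fresh samples from each group.
for name, threshold in [("A", 0.0), ("B", 1.0)]:
    x_test, y_test = make_group(5000, threshold)
    accuracy = (model.predict(x_test) == y_test).mean()
    print(f"group {name}: accuracy {accuracy:.1%}")
# The learned boundary sits near group A's threshold, so group B fares worse:
# bias from the training mix, not from malicious programming.
```

Run as written, group A scores close to perfect while group B loses a large slice of accuracy, even though the code contains no deliberate bias at all.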
 
Humans are fallible. Everything humans create is fallible. Therefore AI is fallible.

It will not always work the way we want it to. And the day will come when a computer really will say 'no'; it's what happens then that is the question. The modern world's reliance on computers is a Damocles sword hanging over our heads; and the thread is thinning.
 
There is always the plug ;)
(Edit: Tried to stick the Zappa 'Dumb all over and maybe even a little ugly, just like us' clip... but computer said no, or whatever the 'puter equivalent of no is)
 
"I don't know that it has been done intentionally, but it has been done unintentionally. An AI is only as good as the information it is given. If that is faulty or biased, then so is what results from it. And who is to say what is faulty, biased or right?"
To some extent my answer to that is to look to the original scientific method: how the world acts is not set by any opinion, even an AI's. The AI is just giving you its opinion, based on what it's read. In that way it's like a lot of voices on the internet! If you want to know, as much as anyone can, which model of a thing or phenomenon is best to use when dealing with it, you have to go and examine the world itself, offline and in person, and compare what the models predict with how the world acts (and probably pick the least imperfect model and tweak it as new evidence comes in).

I wonder if the rise of the internet as such an indiscriminate communication tool for both information and misinformation actually means that the age of being able to look up reliable 'facts' is ending, and all we will be left with is doing the legwork to check how the world actually acts against the untrustworthy opinions of natural and artificial intelligences alike.
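
As a toy illustration of that loop (with synthetic 'observations' standing in for real-world measurements, and two made-up candidate models):

```python
# Toy version of "compare what the models predict with how the world acts":
# score competing models against observations and keep the least wrong one.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
observed = 3.0 * x + rng.normal(0.0, 1.0, size=50)  # stand-in for real measurements

candidate_models = {
    "linear": lambda x: 3.0 * x,
    "power": lambda x: x ** 1.5,
}

# Mean squared error of each model's predictions against the observations.
errors = {name: float(np.mean((f(x) - observed) ** 2))
          for name, f in candidate_models.items()}

least_imperfect = min(errors, key=errors.get)
print(errors)
print("least imperfect model:", least_imperfect)
# As new observations arrive, re-run the comparison and tweak or replace models.
```

The point isn't the code; it's that the arbiter is fresh data from the world, not any model's (or AI's) say-so.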
 
Agreed. It's the theme of my novel The Autist.
 
One thing I have noticed is that, at the moment at least, having access to these AIs still doesn't make you an expert on anything you want, because you still need to know the right questions to ask and prompts to give. It speeds up the research process hugely - but if you don't know 'X is relevant to Y, so I need to ask about X', you'll still miss that bit of detail. At this point I think a basic grounding in a topic is still needed to get the best use out of these entities. However, I may be a bit guilty of motivated reasoning there... :D
 
"[AI] speeds up the research process hugely"
But does it? Does getting a syntactically correct list of summary points really replace research? Often, when I am researching topics, I am looking for the odd outlier pieces of information, and I am looking to combine minority views into a coherent whole that is both logical and outside the mainstream view. I am uncomfortable with multiple pages of links to full articles being replaced by a couple-hundred-word summary.
 
I think we're talking on a very similar point: the AI gives me a surface-level answer to a specific question faster than I can research it using a normal search engine. But it doesn't provide me with the deeper nuance and knowledge and, perhaps more significantly, it doesn't show me where the gaps are, or where the interesting links to other topics are - especially the unexpected kind, like neutron stars thousands of light years away being connected to extinctions of certain species on Earth.
So it depends on what level of research you're going for - but it does provide that surface-level answer faster. I suspect people are already falling over that distinction. But where I've known the right questions to ask, and have known where there are links of interest between topics, I have found it useful compared to a normal search engine, so I don't want to be dismissive of it as a useful tool.
 
This may not be a very safe way of thinking about these things, but I find it easier to process on the level of 'human' interaction: I have a 'guy' who is amazingly good at ferreting out a specific answer to a question for me. But that guy, however fast and perceptive, cannot actually dump large blocks of knowledge and expertise into my mind wholesale. I still need to do the learning to provide the context, nuance, and questions myself. For example, I can ask 'who won the World Cup in 1966' and get the answer, and perhaps a hint of any controversy surrounding referee decisions. But I won't get 'the striker used this style of kick to score the winning goal, this is why that made sense from that position, and here are ways it would have played out if he'd picked a different kick' - at least not unless I already know the game well enough to know that it makes sense to ask those things, and that they're questions relevant to what I'm doing.
And, as has already been noted above, I can't be sure that my answer-finding guy isn't unreasonably biased in the answers they give, either by intent or by accident. They can tell me 'England vs Germany, England won, winning goal was allowed on the ref's decision but controversial'. They're not likely to tell me that even modern England players, on seeing the footage played back, look pretty uncomfortable publicly saying that ball was more over the line than not. Or that the rules of the time state pretty clearly that it is the ref's decision that makes it an allowed goal, and that that decision is final. They don't tell me the ref's name, or how making that call affected his life. Those things are where I, and a lot of others, find an interesting story to be had - but if I don't know enough about the event already to think to ask, my guy will never know to tell me. And, if trained by a 'misguided' England fan, they may not tell me even if I do ask.
 