All that can go wrong with AI - AI Risks

CultureCitizen

Well-Known Member
Joined
Feb 14, 2023
Messages
124
This is a list of the risks I perceive we currently face with the current level of artificial intelligence. Feel free to add more risks and discuss them.

Current risks:

Information - The proliferation of fake news backed with AI-generated images and voice.
Education - Students may feel inclined to let AI do their homework. Students may also feel discouraged from pursuing careers where AI is heavily involved.
Fraud - Fraud powered by AI: voice impersonation can facilitate identity theft.
Jobs - Replacement of some white-collar jobs: graphic design, call centers, music composition, translation.
Financial instability - Financial asset speculation powered by AI may increase.
Military - AI-powered drones, drone swarms, and combined arms will give an advantage to any army using AI to support military operations.
 
I think the really dire problems will be subtle and unexpected.
 
And someday, all telemarketers will be AI.
 
None of these issues, however, originated with AI. They are existing issues that may (likely) have AI applied to them.
 
Yeah, it isn't like political conspiracies had to make any sense at all to be effective. So how is an AI going to improve on them?
 
The education aspect is completely new.
The prospect of military action carried out efficiently by machines is completely novel: we have not seen it yet. A relentless, ruthless, optimized killing spree supported by automated logistics and artificial intelligence. The ongoing war in Europe is bad enough as it is; fully automated, it would be terrifying.
 
I actually asked ChatGPT what people could do to curb the spread of misinformation:


"The emergence of deep learning technology has made it possible for AI models to mimic human speech with a high level of accuracy. While this technology has many beneficial applications, it can also be misused in harmful ways, such as impersonating someone for fraudulent or malicious purposes. Here are some ways we can prevent bad things from happening as a result of AI-generated voices:

Legal regulations: Governments can enforce laws that prohibit the use of AI-generated voices for fraudulent purposes. For instance, in the US, California has passed a law that makes it illegal to use deepfakes or AI-generated videos or audios for political purposes without disclosing that they are fake.

Education: Raising awareness among the general public about the existence and potential impact of AI-generated voices can help people recognize the signs of a fake voice. Education campaigns can also provide information on how to report suspicious activity.

Authentication techniques: Voice biometrics and other forms of voice authentication can be used to verify the identity of the person speaking, making it harder for bad actors to use AI-generated voices to impersonate someone else.

Development of countermeasures: Researchers are working on developing algorithms that can detect and identify AI-generated voices. These countermeasures can help to identify and flag any content that is not authentic, reducing the impact of fake voices on society.

Ethical guidelines: Technology companies can adopt ethical guidelines that promote the responsible use of AI-generated voices. Such guidelines can ensure that these technologies are used for legitimate purposes and not for malicious or fraudulent activities.

In conclusion, preventing the misuse of AI-generated voices requires a multi-faceted approach that involves legal regulations, education, authentication techniques, the development of countermeasures, and ethical guidelines. These efforts can help to mitigate the potential risks associated with this technology and promote its responsible use."


I do understand that there's more than just one factor, though, and that the program could very well be flawed.
 
I also asked ChatGPT and it disagreed with what it told you.
 
Education - Students may feel inclined to let AI do the homework.

The education aspect is completely new.
Even in my days going to college (1975-1979), there were term papers for sale, people would do homework for others, and people would even try to take exams for others. I recall several years ago there was a big push for professors to use computers to detect submissions that were identical or very similar to those on known report-selling sites. "Cheating" has long been a bugaboo of higher education, and AI-generated responses are merely the newest technique in an age-old problem.
 

Students may feel discouraged to take some careers where AI is highly involved.


I was not referring to that part; I was referring to the lack of motivation, because AI can generate art and even short videos from a prompt.
It can also generate music. That has an effect on students that was previously non-existent.
 
That, however, is what automation does. It takes challenging rote tasks, reduces the skill level required, and allows the untrained general population to do something without relying on a skilled expert. I remember when typing accurately without looking at the keyboard was a prized skill. Students could hire someone to type their term papers. Offices had a typing pool of people who turned handwritten pages into typed documents. Driving a stick shift was once a necessary skill; now it has become unnecessary. Photography reduced the value of someone who had learned to faithfully reproduce a person's image by manipulating brushstrokes and carefully mixing colored paints. Autofocus and image identification reduced the need for professional photographers who knew how to set lighting just so and manually adjust focus, f-stops, and shutter speeds. Now people who haven't bothered to learn how to adjust images in a graphic design program can create interesting pictures merely by describing their intent. Horrors.

I try to take a balanced approach towards AI and view it as further augmenting human skills and broadening access to results for the general population, especially within fail-safe activities, which I would define as activities where the consequences of failure are trivial. In cases where the consequences, even if low probability, are significant, I urge caution. The failure modes of AI systems are unknown, and there are no mechanisms to analyze or determine what failures might occur. Replacing humans completely in these scenarios is foolhardy, as is a learn-by-doing approach.

AI has the ability to make certain actions available to the general population while decreasing the need for certain specialist skills. This is not new. AI, though, should not be seen as being able to replace humans, and a human-in-the-loop approach to all decisions continues to be necessary.
 
Well, that's the problem: AI will make these changes extremely fast across all knowledge areas.
Workers will still be needed, but in smaller numbers.
 
I don't really agree with this. Your logic has surely been disproven by now. We were all supposed to be on three-day weeks because of a process of automation that began decades ago. But capitalism doesn't work like that. Enterprises pop up to consume available resources (not the other way round). If people are replaced by machines in one industry, another will pop up to utilize them as a resource. What worries me is that it is often low-level jobs where automation and AI find it hardest to replace people.
 


I think that it's in all governments' interests to keep people occupied with work, for lots of reasons. To some extent it probably is in our best interests too (although it doesn't feel like it on a Monday morning!). I agree that as one avenue of work closes off, another will open. Think how much automation has changed many industries. Back in Victorian Britain, people filled factories making things, worked in farming, or went down the pits mining, etc. Most of those industries are now gone or have been partially or fully automated, but in their place other jobs have become needed.

When it comes to AI going rogue, rather than Skynet I think of the old Star Trek episode A Taste of Armageddon, where computers calculate the fatalities in a simulated war; the required number of citizens are then expected to enter 'disintegration chambers' to be executed. The thing is that the computers made the decisions, but the humans had become so blinded by relying on technology that they couldn't see that there was a simple alternative where no one had to die.
 
The sentence "Workers will still be needed but in smaller numbers" does not imply that wealth will be distributed equitably among all mankind.
It implies exactly what it says: the same number of people, hence fewer jobs and more unemployment.
And indeed, the current wave of automation will affect mostly white-collar workers.
 
Enterprises pop up to consume available resources (not the other way round). If people are replaced by machines in one industry, another will pop up to utilize them as a resource. What worries me is that it is often low-level jobs where automation and AI finds it hardest to replace people.
I basically agree with this thought, but I would phrase it slightly differently. A reduction in the effort and tedium of repetitive tasks does not imply that there will be a reduction in personnel unless that specific market is already saturated. It is also likely that there may simply be an improvement in the quality of the goods produced, but not necessarily an increase in quantity. In the near future, I see much of the AI displacement occurring in already automated tasks, as AI approaches push out existing approaches. The result will simply be improved internet searches, better spelling and grammar checkers, etc.

The positions most likely to be threatened by the introduction of AI techniques are the areas already under pressure from automation. Things aren't going to jump the shark and lead to widespread unemployment of white-collar workers. I do recognize that people are not fungible and that it is of little value to the displaced that new job positions open up, as they are often unqualified for the new positions. This is certainly a stressor on society, but it didn't start with AI-driven automation, nor do I believe it will increase because of it.
 
Well, that's the problem, because AI will make these changes extremely fast across all the knowledge areas.
Workers will still be needed but in smaller numbers.
I understand what you are saying, CC, but exactly the same was said when the 'computer age' started in the '50s and '60s. What actually happened was that new industries started up. As a minor example, some of the wealth generated allowed the leisure/foreign holiday industry to blossom. My parents had never been abroad (apart from my dad's brief visit to Dunkirk!), but now it's commonplace.

Edit: Looks like I've repeated what Christine was saying more eloquently than I have.
 
