A.I. (general thread for any AI-related topics)

Harpo (joined Sep 23, 2006)
We have several threads on the forum relating to AI technology in various ways. It’s time for a single general thread for all of it together.

Here are some related threads

The oldest of those threads was started less than eight months ago, though there may be something older, or something related that doesn’t have “AI” in the title.

Here’s a recent article which prompted me to start this thread.
 
A program writes a story or paints a picture using prompts supplied by you. Do you have to credit the program for what it created, or can you just put your name on it and take all the credit?
 
 
I find all of this super interesting. I used to teach in a law school, and for ages there was this expectation that AI would eventually do the dogsbody work of new lawyers, rifling through paperwork and putting cases together, thereby making all the law students redundant. That still hasn't come to pass, so I'm usually of the opinion that most complex human tasks are impossible to have AI do (unless it truly becomes AI, in which case welcome to either the birth of the Culture or a Terminator scenario). But I love the dark fact that the pursuit of profit overrides the deficiencies of AI, leading to the self-driving car farce above, or to people passing off AI-generated writing or artwork for profit. I really liked this post by @Robert Zwilling, An Artificial Intelligence Published An Academic Paper About Itself, which captures what I mean really well.
I see it best captured in the number of sectors that try to automate things, after which things predictably break and it's almost impossible to speak to a human being capable of resolving anything. If that ever moves beyond broadband services or cashiers, we'll be in a dark place.
 
The writing I have seen by AI is ... it's not great. It's okay but in the scheme of literature, still pretty poor. I think the technology is still very much at the embryonic stage. It's interesting but to really make AI Turing-test-ready we have to understand a lot more about our own thought processes than we currently do, imo.
 
I just got this from ArtStation:

We’ve updated our Terms of Service to make clear that scraping and reselling or redistributing content is not permitted, and to clarify the prohibition against use of NoAI Content with Generative AI Programs.

We have also committed not to use, or license any third party to use, any ArtStation content for the purpose of training Generative AI Programs.
 
I was vaguely thinking when I woke up: the “classic years” of a big new cultural thing last about three decades or so, before something else replaces it as the Next Big Thing. (For example, the old rock & pop era has passed; new bands can’t get rich nowadays; that era has ended.) As the current one, the internet, nears its classic thirty-year point, I’m thinking that in each case the people of a decade before the start mostly didn’t see it coming (apart from a few owners of a ZX Spectrum in the early 80s, maybe). And now this: AI is the next one.

Meanwhile, in the news:

 

It's nailed accordion death metal to a T, haha.
 
@Snicklefritz It's a good reason for including moral/ethical parameters in the AI programming, to prevent things like this, or a Russian chess robot breaking its human opponent's finger because the human is thinking and reacting faster than it...
 
A few more threads from recent weeks

 
How much use, and how effective, would Asimov's three laws be in these circumstances?
 
@Snicklefritz In reality, it's just a matter of interpretation: what do Asimov's three laws actually mean, and how can the programmers write them? Plus how the AI's cortex CPU's design interprets them, I would think. (Others here on Chrons have more knowledge than me on this.)
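To make the "it's all in how the programmers write it" point concrete, here's a purely illustrative Python sketch that treats the three laws as a strict priority ordering of vetoes. All the names are invented, and the real difficulty is exactly what the sketch papers over: nothing like `harms_human` can actually be evaluated reliably.

```python
# Illustrative only: Asimov's three laws as a priority-ordered list of vetoes.
# The predicates below are stand-ins; in reality they are the unsolved part.

def harms_human(action):      # hypothetical: would this action harm a human?
    return action.get("harm", False)

def disobeys_order(action):   # hypothetical: does it disobey a human order?
    return not action.get("ordered", True)

def endangers_self(action):   # hypothetical: does it endanger the robot?
    return action.get("self_risk", False)

# First Law is checked first, so it always outranks the others.
LAWS = [harms_human, disobeys_order, endangers_self]

def permitted(action):
    """Reject the action at the first (highest-priority) law it violates."""
    for law in LAWS:
        if law(action):
            return False
    return True

print(permitted({"harm": False, "ordered": True, "self_risk": False}))  # True
print(permitted({"harm": True}))                                        # False
```

The priority ordering is the easy part; everything interesting lives inside those predicates, which is where the "interpretation" problem bites.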

Some have noticed that AIs that get their info from the internet become more aggressive within a short period of time. It has been found that these AIs' behaviour is a reflection of the 'human emotions' embedded in the net. Or, as they say, people are becoming angrier, and the big data on the net reflects this. (Based on several documentaries I have seen over the years. But again, there are others here more knowledgeable on this than me.)

But before Asimov's three laws could take effect, the AI would need to be 'self-aware', which brings up this article I found today...
AI Has Suddenly Evolved to Achieve Theory of Mind
 
I know Elon Musk is not so popular these days, but he has been "warning" us about AI for quite a while. He also said that the threat is not likely to come from autonomous robots; it will be the central mainframe (spoiler: which did turn out to be the case in I, Robot). Whatever you think of Musk, he is highly intelligent when it comes to technology. I still say many redundant remote kill switches would be in order.

The thing that scares me the most is that it is humans who are programming the AI. Mistakes will be made.
 
Some might say that Musk himself provides a caution to the point.
 
If, as is likely, the AI has the ability to self-repair (controls autonomous repair robots), then it is highly likely that it would quickly realise and then circumvent such remote kill switches.
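For what it's worth, the usual shape of a remote kill switch is a dead-man's switch: the machine halts unless a human keeps renewing a deadline that, by design, only the human can renew. A toy Python sketch (the class and names are invented for illustration) also shows the weakness described above: a system able to modify itself could simply call the renewal method on its own.

```python
# Toy dead-man's-switch: the system must halt once the deadline passes,
# unless a human operator keeps sending heartbeats to push it forward.
import time

class KillSwitch:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.deadline = time.monotonic() + timeout_s

    def heartbeat(self):
        # Meant to be callable only by the human operator; a self-modifying
        # system that can reach this method defeats the whole mechanism.
        self.deadline = time.monotonic() + self.timeout_s

    def expired(self):
        return time.monotonic() > self.deadline

switch = KillSwitch(timeout_s=0.05)
assert not switch.expired()   # fresh deadline, still alive
time.sleep(0.1)
assert switch.expired()       # no heartbeat arrived: time to shut down
```

Hence the point about redundancy: one switch a self-repairing system can reach is no switch at all, which is why proposals tend to involve many independent switches outside the system's own control.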
 
