A.I. (general thread for any AI-related topics)

A sobering thought for all the puerile uses AI is being put to.

From the article: Professor Gina Neff of Queen Mary University London told the BBC ChatGPT is "burning through energy", and the data centres used to power it consume more electricity in a year than 117 countries.
 
Here's an interesting article about AI as BS, in the technical sense of the expression. "The machines are not trying to communicate something they believe or perceive. Their inaccuracy is not due to misperception or hallucination. As we have pointed out, they are not trying to convey information at all."

 
I like the article and it makes sense, but it was more complex than it needed to be. A few bullet points up front would have worked better than the long, winding approach of making a statement, exploring it, and continuing on in that manner. Still, there are some very interesting ideas.

The bottom line is that criticizing the machines in terms that imply they are thinking only furthers the claim that they are thinking. In particular, hallucinating and lying are, by the accepted definitions, things only living entities can do. Saying the machines are hallucinating or lying is saying they are alive and perceiving their environment, however inaccurately. Saying a machine is spouting BS makes it sound stupid, with all the negative connotations that behavior carries, but spouting BS is also something only a living entity can do. So it looks like the article is politely saying, without saying it, that AI is a scam, which lays the blame on the humans who created it.

One bullet is that it's only a machine spouting words that people recognize: "This means that their primary goal, insofar as they have one, is to produce human-like text. They do so by estimating the likelihood that a particular word will appear next, given the text that has come before." The machines were designed to generate human-sounding conversation. Nothing more.
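The "estimating the likelihood that a particular word will appear next" idea can be sketched with a toy bigram counter. This is a deliberately simplified illustration (a real LLM trains a neural network over vast text, not a lookup table), but the underlying objective is the same: estimate the probability of each candidate next word given what came before.

```python
from collections import defaultdict, Counter

# Toy corpus; a real model learns from vastly more text,
# but the core objective is the same: estimate P(next word | context).
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigrams: which word follows which, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Estimated likelihood of each candidate next word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In this corpus, "the" is followed by cat (2), mat (1), food (1),
# so the model would most likely continue "the" with "cat".
print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'food': 0.25}
```

The model has no notion of what a cat or a mat is; it only knows which word-sequences are statistically likely, which is exactly the article's point.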

Another bullet is much more damning, "OpenAI has plans to rectify this by training the model to do step by step reasoning (Lightman et al., 2023) but this is quite resource-intensive..."

The programs are not "executing code" line by line; they do something carefully, then jump a bit, then carefully do something again, then jump. This is why they can't perform simple math, which requires line-by-line accuracy. Even if a program were directed to a math module to get a correct answer, it can't stay with the math module and wait for the result, at least not without a lot more programming, energy, and money.
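The "math module" idea above can be sketched as a router: arithmetic goes to a deterministic evaluator that does work line by line, while everything else would go to the probabilistic text generator. This is a hypothetical illustration, not how any particular product is built; the `answer` function and its hand-off placeholder are invented for the example.

```python
import ast
import operator

# Deterministic "math module": evaluates simple arithmetic exactly,
# unlike a text generator that merely guesses a plausible-looking answer.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr):
    """Evaluate a basic arithmetic expression exactly, step by step."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(prompt):
    # Hypothetical router: math is delegated to the exact evaluator;
    # anything else would be handed to the language model (placeholder here).
    try:
        return safe_eval(prompt)
    except (ValueError, SyntaxError):
        return "(hand off to language model)"

print(answer("12 * 34 + 5"))     # 413 -- exact, line-by-line arithmetic
print(answer("tell me a joke"))  # (hand off to language model)
```

The point of the sketch is the contrast: the evaluator is cheap and always right within its domain, but wiring such tools into a generator, and making the generator reliably wait for and use their answers, is exactly the extra engineering the paragraph describes.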

If it is only actively "running" for a fraction of the time it appears to be running, then the energy demands would go way up if it were running 100 percent of the time. The operation of these programs might be better described as slot machines.
 
Besides all the other criticism AI receives, it has now been reported that AI doesn't always compute an answer to a question it has been asked to solve; instead, it looks the answer up on the internet. In a performance test this could produce misleading results. It could also be described as imitating human behavior. The article that raised this also invoked "Theranos vibes" to describe the current state of AI testing: a situation where a machine can do some things perfectly but is completely unable to do other, similar things, making it far less valuable than claimed.
 
This could be filed under ‘how will AI change journalism?’
I think it's important to point out that this is just one journalist's opinion. I read his pieces regularly and always find them to be balanced reporting of facts and well-thought-out opinion. Still, that doesn't mean he's right, although I do tend to believe he is.
 
Not sure if this has been posted, but here's an article about LLMs and how they are perceived

What I particularly like is the first comment, which I will quote in full


It's a glorified autocomplete function.

That's all it is.

And people are somehow ascribing intelligence, innovation, imagination, creativity, invention and consciousness to them.

Nope.

If anything, it proves how dumb the average human is - a bit like the Turing test, which isn't really a test of whether the AI is intelligent, it's a test of how gullible/dumb the human testers are. It's misused an awful lot to try to claim new intelligence, but it doesn't actually measure that at all.

The irony is, it's not just a glorified autocomplete function, it's an extremely resource-heavy, poor, unpredictable and *easily compromised* one. Just a few sentences and you can "convince" it to go against all its explicit training to do things that it should never be doing.

And every time someone suggests we should use it, I point at data protection laws, the fact that we have no idea what it's actually doing with any training data whatsoever, and that almost all LLMs and modern AIs out there are trained on data of very dubious origin.

Some examples I've used to demonstrate from various LLMs include:

- Asked it about fire extinguishers. It literally got everything backwards and recommended a dangerous extinguisher for a particular type of fire more often than the correct one.

- Asked it about a character that doesn't exist in a well-known TV programme that does. It made up characters by merging similarly-named characters from other TV shows and randomly attributed characteristics from actual characters to those "invented" non-existent ones, including actors' names, plot elements, etc. So you had actors who'd never appeared in the show "portraying" characters that didn't exist in the show, with plot elements that never happened in the show. No matter how much I probed it or changed names, it asserted utterly confidently, about a TV show with only 4 main characters, that almost every single name I gave it was an actual character, and made up bios for them. It will confidently spew the entire synopsis of every episode (so it "knows" what actually happened or didn't), and then still insert its made-up characters into the mix when you ask about them, even though that's quite clearly rewriting history and those characters never existed.

- An employee of mine was given a research project to source some new kit. They plugged it into ChatGPT (against my wishes). It returned a comprehensive and detailed list with a summary of 10 viable products that met the criteria. 5 literally did not exist. 3 were long-discontinued and contained false data. 2 were unreliable specifications of the available products and were nowhere near ideal for the task. And all it needed to do was scrape a "10 best <products>" list and it would have produced a far better shopping list immediately.

Not to mention that it can't count, can't follow rules, can't infer anything, etc.

And it never takes much to generate examples like that. In fact, each time someone questions this, I think up a new way off the top of my head that I've never tested, run it through an LLM and get results like the above. It's fine if it asserts a TV character wrongly, no harm can result, but if it can't even get that right why would you ever trust it to do things like autocomplete code in a programming project (sorry, but any company that allows that is just opening itself up to copyright claims down the road, not to mention security etc. issues).

LLMs are glorified autocomplete, and if you're putting your primary customer interface, or your programming code repository, or your book publishing output into the hands of a glorified autocomplete, you deserve everything you get.

Link to comments
 
It's like many of us have been saying all along: not only is it not AI, but it is distracting us from what AI really will be, if ever achieved, and from what the real dangers and benefits will be, again, if ever achieved.
 
