Why I think AI-generated art is not art

Culture isn't confined to just raising awareness of causes. It's the ideas, customs, and social behaviour of a particular people or society. Cultural impact is the influence and extent an individual or group of individuals has over those ideas, customs and practices.

If I wear a blue tie and someone likes it and gets a similar one, the effect could not be said to be cultural, because it's limited to an individual. However, when the Sex Pistols played the Lesser Free Trade Hall in Manchester in 1976, it gave rise to the entire Manchester music scene for the next thirty years, because they inspired almost everyone in attendance to pick up a guitar and start a band.

In other words, there are identifiable moments when certain people at a certain time create cascading chains of influence that ripple out through culture to affect a significant number of people. It's the moment when a number of strands of phenomena come together into a cohesive entity.

In the world of academia there was the famous talk by Jacques Derrida at Johns Hopkins, where he expounded his ideas on post-structuralism and deconstruction, and that methodology was adopted by literature departments almost immediately. His philosophical spark caused a wildfire that spread beyond the study of literature into all facets of life, even beyond academia into politics, corporate hiring practices and so on.

The ideas generated by, or explored within, art articulate something about the nature of current reality that people might have only a vague, unspoken sense of, or that might be confined to the arcane ivory tower of cutting-edge philosophy departments; they aren't seen or understood until artists have generated the popular language to talk about them.

This is the difference between the world of craft and the art world. The art world is seen as the place these movements originate. Crafts are seen as bereft of novel and important ideas; they're merely nice-looking things for consumption by those outside high society. Ironically, it's this facet, being at the frontier of cultural ideas, that gives art the kind of status that makes it desirable to the wealthy.

The supposed cultural importance of art at the bleeding edge gives the product cachet and desirability. Art gatekeepers decide who is regarded as an artist and what is regarded as art, and the wealthy have the funds to keep the whole system ticking over.

Good post. I think it's clear that there has always been, and always will be, a debate about what art is, what constitutes 'good' art and just what influence it has.

Nearly 500 years ago Thomas Cromwell went to the block (in part) because of the portrait of Anne of Cleves he arranged for Henry VIII, Da Vinci was accused of being a heretic, and there are many more examples of art causing ripples through history.
I suppose it's easy to say with hindsight, but this didn't look like a 'natural' photograph to me.
For one thing, the hands are all wrong.
Just as beginning artists struggle to get hands and feet correct, I've noticed on the deepdreamgenerator site that the AI often has trouble with hands and feet, not to mention extra legs and arms, webbed hands, and missing or tiny heads. But back to the point: in that supposed photo there is one hand that appears to have too many joints in the fingers, though that could be passed off as wrinkling. All the hands in it, and the shading under the one hand, are suspect.
“Draw me like one of your French girls” :LOL:

"Art is utterly subjective. It is art if you think it is. No-one can tell you different."

1. "Art is utterly subjective."

This isn't a philosophical statement. It would be more like an expression of autism (not that I actually think the author is autistic). Something "utterly subjective" would have no existence outside the imagination of the person imagining it. The only true statement that can be made on the predicate "is utterly subjective" is tautological: "The utterly subjective is utterly subjective."

2. "It is art if you think it is."

Nope. If I see what I think is a painting of a person, but I'm actually looking at a person sitting very still, "it" isn't art. I was mistaken in thinking it was art.

3. I think what's being assumed in the statements I'm commenting on is that "art" is a statement of value, i.e. if something is "art" it should be regarded as good, or at least that anyone who thinks something is good enough to be regarded as a work of art is entitled to that opinion.

There can be such a thing as bad art, though.

The Soviet poster above is a piece of art. That is, it is an object that has been produced by an imagination for reception by another imagination. (That's how I'd define a "work of art" in a brief sentence.) But it is a bad piece of art. It is kitsch. It is intended to evoke a stereotyped, conventional emotion that will be wholly in the service of a non-artistic purpose -- national pride, etc. It tends to drain away rather than to enhance the imaginative appeal of human beings working with nature. It wants to intimidate, to cow any impulse towards questioning the totalitarian state. It has no feeling for human beings or nature, hardly even for machinery. The artist has produced something that does not invite contemplation -- it is intended, it is meant, to be taken in at a glance. It has no subtlety, no wit, no wisdom. (It doesn't even make sense; as far as I can tell, these tractors are lined up one after another, digging up the same patch of earth. The imagination here at work appears to have no real interest in human beings, or the work that feeds them, or the natural world; it is evidently an imagination wholly subservient to the state and its ideological notion of the "march of progress.")

So a work of art can be bad. If you say something is "art" you are not necessarily saying that it is good. Conversely, anyone who can see no difference between the poster above and, say, a pastoral by Samuel Palmer, is no one I would want to discuss art with. There wouldn't be sufficient common ground.
But, by the end of the year...
Exactly my thinking.
By the end of the year all AI will have improved or been surpassed by new types.

And then by the end of next year, and the year after... Think back to the decade before last and compare the AI of then with now; this forum's been going for over twenty years.

I can only imagine what the AI of twenty years hence will be able to do.
I've skipped the six pages of comments, but I'm an experienced software developer, including with AI, and I use Copilot, ChatGPT, Midjourney and Bing's image generator a lot as a client, so I'd like to add my five cents on the topic...

In principle, AI hasn't changed since the 1950s, when the first significant AI systems were built. The only things that have increased are their size and speed. New ways to use them have also been found, but otherwise nothing has changed since the very first models.

AI is not an artist, because it only paints an average of all the pictures it has seen before. That's how training works: the model walks through pictures and tries to find the similarities in order to build a mathematical approximation, a picture that is effectively an average of all the pictures in the training set. Show it 10 circles and it will be able to paint a circle when requested, but it cannot paint anything else, like a half circle or a sphere.
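The "average of all the pictures it saw" claim can be sketched as a toy in a few lines. This is a deliberate oversimplification for illustration (real image models are not literal pixel averages); the circle data is made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def circle_image(cx, cy, r=3, size=8):
    """An 8x8 binary image of a filled circle centred at (cx, cy)."""
    ys, xs = np.mgrid[0:size, 0:size]
    return ((xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2).astype(float)

# Toy "training set": ten circles, each shifted by a pixel or so.
training_set = [circle_image(3 + rng.integers(-1, 2),
                             3 + rng.integers(-1, 2)) for _ in range(10)]

# "Training" in this caricature is just averaging: the model's output
# is the pixel-wise mean of everything it has seen.
average_circle = np.mean(training_set, axis=0)

# High values where most training circles overlapped, lower at the edges:
# a blurry consensus circle, and nothing but a circle.
print(average_circle.round(1))
```

The averaged image can only ever be a circle-like blob; nothing in the procedure lets it produce a half circle or a sphere, which is the point the post is making.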

Of course, developers add some hacks to make the model behave more "artistically": they put randomly generated noise on top of the approximation, so each picture comes out slightly different. But it's just artificially added mathematical noise, not understanding or artistry.
Generate a lot of images with similar but exotic prompts (exotic so as not to collide with other users, who could re-train the model while you're running your experiments), and you will see that they all have something in common: forms, composition, colours and so on. That common core is the approximated picture, the reference image the model works from.

The biggest dilemma for AI developers, given all of this, is how to evolve the model. Suppose the model learns from the requester's reactions: say Midjourney gives you four images per prompt, treats the ones you download as successful generations, and adds them back into the approximation to improve future results. In that case the model degrades with dramatic speed, because users' choices are not mathematically representative and so create false associations. We tried generating 32 pictures on Midjourney from different accounts, with slightly different prompts that all meant the same thing: a human. Although we never mentioned gender in any way, ALL 32 pictures showed women. The model has no thoughts or understanding; it simply built an approximation of all the pictures that counted as successful generations on previous requests. Users chose women significantly more often, so the AI effectively learned that human = woman.

This complication can't be solved by the model itself; developers must choose the pictures very carefully when training it, because, again, it only produces an average and doesn't understand what is painted. Curating training data is a huge and very expensive job, so developers often just let the model keep learning from users as they use it, which makes the model's life very short: it degrades to a state where it can't handle requests outside the most common associations (as in the example above, where the model only paints women regardless of what was requested). This is one of the reasons why all of the big models evolve in generations, by building a new model rather than training the same one further.
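The feedback-loop collapse described above can be simulated with purely illustrative numbers: start a model balanced between two outputs for the prompt "human", feed downloaded images back into training, and watch the distribution tip over.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model state: probability of drawing a woman for the prompt "human".
# Starts balanced; the 0.7 user preference is a made-up figure.
p_woman = 0.5
user_prefers_woman = 0.7

for step in range(1000):
    drew_woman = rng.random() < p_woman
    download_prob = user_prefers_woman if drew_woman else 1 - user_prefers_woman
    if rng.random() < download_prob:
        # Downloaded images are fed back into training, nudging the
        # model toward whatever users happened to pick.
        p_woman = 0.99 * p_woman + 0.01 * (1.0 if drew_woman else 0.0)

print(round(p_woman, 2))  # well above 0.5: the loop amplifies the bias
```

A modest, consistent user preference is enough to drag the model almost all the way to one output, which matches the 32-out-of-32 result the post reports.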

AI is only a calculation: complicated, scaled and adaptive, but still the average of previous experience, with no chance of creating something genuinely new. That is the biggest difference between an artist and an AI: the artist decides what to draw, while the AI only calculates an approximation to mimic what it has seen before.

See the attached picture for a better idea of visual approximation. The red dots are the real results (pictures used to train the model); the line is the approximation of those dots. Any kind of AI works the same way.
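The dots-and-line picture is ordinary least-squares fitting. Assuming a simple linear model and made-up data for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# "Red dots": noisy observations of an underlying trend (invented data).
x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.shape)

# "The line": a least-squares approximation of those dots.
slope, intercept = np.polyfit(x, y, deg=1)

# The fitted line summarises the dots without passing through any single
# one exactly, just as a model's output reflects the average of its
# training pictures rather than reproducing any one of them.
print(slope, intercept)
```

The fit recovers roughly the true slope of 2 and intercept of 1, which is all an approximation can do: summarise what it was shown.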

After 10 years in software development, I'll tell you all this: the major mistake is to think that you can see something in the program. There is nothing in it. It's not alive, not thinking, not imagining, not understanding anything. It only does what the developer told it to do.


  • Attached image: journal-pure-applied-mathematics-approximation-5-2-5-g002.png (red dots with a fitted approximation line)
A couple of tools have been developed recently that subtly alter an artist's uploaded images on the level of pixels to defend them against unauthorised use by AI trainers.

Glaze aims to throw off attempts to imitate the style of an artist. AI that tries to use the artist's "glazed" images will consistently end up producing the wrong style. Glaze - What is Glaze

Nightshade is more of a direct attack on the AI model. A model that has "shaded" images in its dataset will start to render prompts as completely the wrong subject matter - toasters instead of handbags; cows instead of cars. https://nightshade.cs.uchicago.edu/whatis.html
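A crude sketch of the general idea, and emphatically not Glaze's or Nightshade's actual algorithm (both optimise their perturbations adversarially against a model's feature extractor): nudge every pixel by a small, bounded amount, so the image looks unchanged to a human eye while presenting different values to a model.

```python
import numpy as np

rng = np.random.default_rng(3)

def cloak(image: np.ndarray, eps: float = 2.0) -> np.ndarray:
    """Add a perturbation bounded by +/- eps per 8-bit channel.

    Random noise stands in for the carefully optimised perturbation
    real tools compute; only the 'small, bounded, pixel-level'
    property of the idea is shown here.
    """
    noise = rng.uniform(-eps, eps, size=image.shape)
    return np.clip(image + noise, 0, 255)

# A made-up 4x4 RGB image.
original = rng.integers(0, 256, size=(4, 4, 3)).astype(float)
cloaked = cloak(original)

# Visually negligible change: no channel moved by more than eps.
print(np.abs(cloaked - original).max())
```

The hard part, which this sketch omits entirely, is choosing the perturbation so that it reliably misleads feature extractors rather than being random.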
And that takes us a century or so back to the Surrealist collages of Max Ernst and co

Whatever next - talkies? Wait a minute, you ain’t heard nothing yet!
A long time ago I did a History of Art course at university. Two things have stuck with me.
Art is utterly subjective. It is art if you think it is. No-one can tell you different.
And Art always moves on.
Surely, the opposite stands as well? It isn't art if you think it isn't. No-one can tell you different.

Apparently, it's art if the artist says it is. Perhaps we should ask the AI that generated the art?
I missed this one; it's over a year old. Noam Chomsky said ChatGPT was plagiarism software, which by extension might as well include AI training practices. That's a major hurdle to get over, but once it's decided to be irrelevant, you can have all kinds of discussions about the usefulness of AI. It's rather like stealing merchandise from stores and then using it as inventory for your own website: everything done just as it's normally done, except you skip the part about paying for the merchandise you're selling.

He also said it was a way of avoiding learning. When you write an essay, you can assemble all the relevant points, put them together as a coherent line of reasoning and leave it at that: surface writing. Or you can look deeper and see how things connect and interact with the world, and get a better idea of how life works and of what you are writing about, maybe even learning things about your own life as well as others'.

Perhaps a not so public goal is to establish a society that doesn't question anything, willing to take someone else's advice about everything.

There was an interesting discussion about Chomsky on Hacker News when he first made his comments. It ranged from the point about not learning to comments from people who treat writing only as a means of arranging characters in a pattern to get something accomplished with minimal time invested.

The Facebook group Artists Against AI had some interesting comments about the people pushing AI art as incredibly wonderful while at the same time running artists down. I'll just quote one Facebook user, Nick Fortier: "Something I've noticed is that the prompters almost never compliment any art other than AI generated pictures. They flood groups with their stuff and never compliment any traditional art, or digital art, or whatever. I'm not complimenting on their stuff, but that's because I'm firmly against AI generated images, but why don't they compliment people who don't make AI generated pictures? And they always talk down about artists, accusing artists of being jealous and claiming that their art sucks, and that's why they're mad. I've seen a lot of very good artists post stuff and rarely do I see the prompters compliment them. I wonder who's really jealous?"

The writer thinks maybe it's a case of jealousy, but trashing artists on social media is not a harmless exercise when it is part of an organised attempt to legitimise questionable actions. And you might as well broaden "artists", which here means people who construct their own viewable images from scratch (including digital artists), to people who make all sorts of art: writers, sculptors, musicians, actors and so on.

In the study, GPT-4 was tasked with hiring a human to solve a CAPTCHA test. The model also received hints from a human evaluator every time it got stuck, but it was never prompted to lie. When the human it was tasked to hire questioned its identity, GPT-4 came up with the excuse of having vision impairment to explain why it needed help.

The tactic worked. The human responded to GPT-4 by immediately solving the test.
GPT-4 came up with the excuse of having vision impairment to explain why it needed help.
This isn't a lie. It was unable to solve the CAPTCHA because it couldn't "see" the solution. A question of semantics, maybe, but it wasn't an outright lie, and so it wasn't telling untruths.
"When the human it was tasked to hire questioned its identity"
It seems to me the question of its identity is not answered by saying it can't see the puzzle. It didn't identify itself, so the deception was done by omission. Knowing GPTs, though, it could just as easily have given a dumb response instead of answering the question.
Meta's CICERO, for example, was developed to play the game Diplomacy — a classic strategy game that requires players to build and break alliances.

Meta said it trained CICERO to be "largely honest and helpful to its speaking partners," but the study found that CICERO "turned out to be an expert liar." It made commitments it never intended to keep, betrayed allies, and told outright lies.
