Something else to consider: how does generated art compare to a musical remix? Are they very similar, and if so, does the human who guides the software deserve the same consideration as a music remixer, or more, or less?
Remixing has been around for a while now, and usually doesn't generate the same knee-jerk responses that "AI" does. However, learning systems can't make art on their own. Generally, the operator has to guide the end result through feedback over and over again.
"Make me a painting of Elvis Presley holding a porcupine."
"No, a younger Elvis."
"Now have an elephant walking over a VW beetle in the background."
"Put a squadron of spacecraft flying overhead."
"No, nix the spaceships and add a flight of dragons."
It's not very likely that this results in a work that resembles any previous work, even though it borrows heavily from them.
And although the explosion of artwork from non-artists is likely to be mostly noise, some fraction will have cultural value, and will be noticed. In being noticed, it will provide feedback to the learning systems, and a sort of evolution will begin. The language models will not be static, so I don't think the result will ever stabilize into something dull, any more than the Internet as a whole has. (Though you might still not like it.)